Why Smart People Make Weak Arguments About AI
Tech CEOs aren’t the only ones captive to motivated reasoning
This essay is an adversarial collaboration by faculty and students in the interdisciplinary Mind and Morality Lab at Boston University.
Authors: Victor Kumar, Tino Themelis, Emilia Lacy, Joshua May. Crossposted at Open Questions and Sensible Philosophy.
Early in 2026, Elon Musk told an interviewer that “we might have AI that is smarter than any human by the end of this year…smarter than all of humanity collectively” by 2031. On his blog, Sam Altman writes that “we are now confident we know how to build AGI” and that “in a decade, perhaps everyone on Earth will be capable of accomplishing more than the most impactful person can today.” Anthropic co-founder Dario Amodei is widely regarded as more thoughtful and scrupulous than his peers. He predicts that AI might eliminate half of all entry-level white-collar jobs in the next one to five years, and that it might eradicate most forms of disease within ten.
None of these people are lying, necessarily. More likely, they’ve simply drunk their own Kool-Aid. Obviously, we have to discount the hype from those who stand to benefit enormously from the rest of us believing it.
What’s less obvious is that incentives also mislead critics—particularly intellectuals, creatives, and other cultural elites on the left. As the technology threatens their livelihoods, many are turning off their quality-control filters and lapping up weak arguments against AI. The critics have their own Kool-Aid.
Motivated Reasoning
As academics, we have a front row view of the impact of large language models on higher education. Some of us are professors who have found ourselves reading AI-generated term papers and have been forced to redesign our courses. Some of us are students who have been advised by our university to use LLMs (“don’t fall behind!”) but are skeptical about the wisdom of that advice—and wonder whether we’ll be assumed to have used them either way. The technology is sowing distrust, casting doubt on the value of a university degree and on the viability of higher education.
It’s not just academics who are being squeezed. Freelance writers are losing professional opportunities to write blog posts, marketing copy, SEO content, and the like. Editors and publishers are being swamped with AI-generated submissions. With newsrooms already hollowed out by online competition for ad revenue, LLMs are reducing the demand for journalists and researchers, displacing copyeditors and translators. Visual artists are being passed over in favor of tools like Midjourney, which can produce high-quality book covers, marketing images, and alternatives to stock photography. Musicians are being replaced. Actors might be next.
The overall economic picture isn’t clear: we don’t know whether automation is causing large-scale disruption. Yet intellectuals and creatives certainly have reasons to worry.
For some, AI is already making the daily grind harder—teachers, editors, and anyone who has to evaluate others’ work. For others, like writers and visual artists, AI developers are at least trying to encroach on their turf, even if they haven’t succeeded yet. More broadly, the cognitive capital that cultural elites have been monopolizing may soon become cheap and abundant—which threatens to dilute their professional identities and reduce their social status. These economic and social anxieties have amplified a more general antipathy toward Big Tech, one that ramped up in 2024 and 2025 as many tech CEOs fell in line behind Trump.
So: intellectuals and creatives feel a strong need to condemn AI. Anxiety and anger are leading them to grab whatever intellectual weapons are at hand, regardless of quality. Classic motivated reasoning.
They don’t have to develop criticisms of AI all by themselves. They can shop from what Dan Williams calls “a marketplace of rationalizations.” In this booming industry, intellectuals and academics are paid mostly in attention and clout to tell people exactly what they want to hear. Vendors hawk their arguments in newspapers, Substack posts, TikToks, and social media threads.
The demand for criticism of AI warps the discourse in subtler ways than provoking outright fabrications: it leads critics to present information that is accurate yet incomplete. For example, progressives regularly opine about LLMs producing disinformation but ignore the possibility that they are more accurate and less divisive than other sources of information they might replace.
Likewise, the possibility of mass unemployment is troubling, but critics pay comparatively little attention to the compounding benefits of productivity growth through the automation of knowledge work. As one of us argues, if these benefits accrue primarily to people who are less well-off, then aesthetic opposition to AI is classist—akin to opposing the industrial revolution because mass production of consumer goods threatens to undercut luxury artisanal creations.
As the consensus against AI builds, it also washes over adjacent knowledge workers who aren’t necessarily threatened by the technology themselves. This is partly an echo chamber effect: if people in your social network, especially high-status elites, endorse criticisms of AI, then it’s natural and probably rational for you to endorse them too. But motivated reasoning plays a role here as well. In polarized environments, being anti-AI is an identity signal, communicating moral seriousness and solidarity. To buck the consensus invites a pile-on. Better still for your social status if you not only endorse the criticisms but turn up the volume even higher.
What arguments are we even talking about? They run like this:
As “stochastic parrots,” LLMs are unreliable, prone to hallucination, and confidently wrong. What’s more, the environmental costs are tremendous. Data centers consume massive amounts of electricity, draining communities of water. And the technology is built on theft, trained on creative work without the consent of creators.
Of course, attributing motivated reasoning to intellectuals and creatives doesn’t refute these arguments. We’re not committing the genetic fallacy. What’s more, intellectuals and creatives are also experts and stakeholders in the institutions that AI is allegedly damaging. To know whether their standpoints either distort or focus their understanding, we have to confront the arguments directly.
Intellectual Value
Let’s start with what might be the most common criticism. Many intellectuals and creatives endorse an argument like this:
LLMs are fundamentally unreliable. As Ted Chiang puts it, “ChatGPT is a blurry JPEG of all the text on the web.” Hallucinations are not only pervasive but also hard to detect because the systems are designed to produce plausible-sounding text. Hence fake legal cases, fake scientific papers, fake books. What’s more, LLMs are famously sycophantic, reinforcing the user’s errors rather than correcting them. All of this renders the models completely unsuited for serious intellectual work.
Let’s acknowledge what the critics get right. LLMs aren’t worth much in areas of the arts and humanities where creativity and expression are paramount. On their own, the models are intellectually shallow and incapable of cutting-edge research or writing. And capability aside, there are some kinds of intellectual work you shouldn’t want to automate at all. As Becca Rothfeld says, the process is often the point.
In general, LLMs are corrosive when they encourage users to stop thinking. The mistake is to conclude that this is the only way to use them. When users retain judgment and intellectual autonomy, and when iterative engagement enables trains of thought that users wouldn’t have entered otherwise, the systems do not replace human cognition but enhance it.
For starters, critics often evaluate LLMs as tools for looking up facts. Their value lies, rather, in offering access to a system of knowledge, not in summoning one-off particulars like scientific papers or legal cases. Start by identifying domains where conventional wisdom is reliable and where independent, converging sources of knowledge are well-represented in the training data—such as medicine, IT, fitness, and coding. Many people (including some of us) learned this first-hand by using an LLM to prepare for a medical appointment, either for ourselves or for a loved one. (Even Rothfeld admits it’s useful for managing her own illnesses.)
Dramatic examples of hallucination are real, but they don’t show that LLMs are generally unreliable. Of course, humans make mistakes too (and placate their conversation partners, and overgeneralize, and have memory failures, etc.). The question isn’t whether the models are infallible but whether, all else equal, you are likely to make more errors without them—or likely to grasp fewer truths. Assuming you don’t have access to the world’s leading expert, is using an LLM better than going it alone, better than consulting “the best available human”? Hallucination used to be a more serious problem for earlier models without web search, and many critics have experience only with models that are several years out of date.
It takes skill to use LLMs well—like every tool. For instance, you need to know enough about the technology not to ask for information that postdates its training run, or else to instruct it to search online. Hallucination may never be fully solvable, and that matters for safety-critical uses. When in doubt, or when the stakes are high, you should seek confirmation by asking for sources and then consulting them. In some cases, you can confirm more directly. For example, if an LLM tells you how to troubleshoot a piece of software, you can just see for yourself whether the advice works.
For serious intellectual work, one principle is key: you should use LLMs to generate inputs to your thinking, not outputs for others to read. Ideally, your engagement is iterative. With each exchange, you take another step through a body of knowledge, synthesizing what you read to refine your inquiry and pose further questions—targeted like no book or webpage can be to the particular issues you seek to understand.
Some STEM researchers have enthusiastically adopted LLMs for research and writing. Other academics—humanities professors, especially—struggle to see them as anything other than plagiarism tools, primed by their students’ abuses of the technology. But LLMs do in fact have many other uses: researching, brainstorming, searching documents, generating examples, copy-editing, stress-testing arguments, tutoring, answering targeted questions, and so on. What’s vital is engaging an LLM as an intellectual consultant rather than replacement—always relying on your own expertise to evaluate its outputs.
It’s noteworthy that some of the people most likely to argue that LLMs are unreliable are established academics. Like everyone else, professors are conservative when it comes to the traditions and institutions they know best (as one of us argues). Perhaps their skepticism is warranted: as stakeholders, their experience gives them greater knowledge and authority.
Or perhaps, as Nicolas Delon argues, academics are members of professional guilds motivated to gatekeep their fields, monopolizing success and status. People with the most expertise also have the most to lose, and that can compromise their judgment. Paul Bloom puts it this way: to ask academics their opinion about ChatGPT is like asking taxi drivers their opinion about Uber.
The Environment
Even if AI chatbots can be useful tools, some critics worry that they contribute to environmental harm and accelerate the climate crisis:
Whether employed for training or inference, data centers devour electricity and guzzle water. Every query damages the planet. What’s more, growth is reckless and unsustainable. In a Nature commentary, Kate Crawford writes that “generative AI’s environmental costs are soaring.” Just at the moment that we so badly need to cut emissions, AI is increasing them.
Some of these concerns are stronger than others, and we’ll give them their due in a moment. But the central environmental argument is that LLMs are powered by data centers that currently consume vast amounts of electricity and water and emit vast amounts of carbon. This claim is false. (To show why, we’ll draw on work by Andy Masley and Hannah Ritchie, among others.)
You might worry that settling this issue requires digging into complex and contested empirical data. But it turns out that there is wide agreement on the data; critics just present it in misleading ways. All sides agree that each (text) query costs about 0.3-3 Wh of energy, roughly 1-3 grams of CO2, and about 10-25 ml of water—these numbers increase only fractionally when you amortize training costs. And what the critics obscure is that all of this is negligible in absolute terms.
As commonly advertised, yes, each query consumes roughly 10 times as much energy as a Google search (a figure from 2019), but that difference hardly matters since both numbers are minuscule. A year of heavy LLM use produces about 11 kg of CO2; the average American’s annual carbon footprint is 14 tonnes, larger by a factor of roughly 1,300. Refraining from LLM use to reduce your carbon footprint is like forgoing avocado toast so you can cover your mortgage.
Some further perspective. One hamburger costs 660 gallons of water (about 200,000 queries). One hour of TV or streaming is the equivalent of 120 queries (perhaps much more). A single query is the environmental equivalent of driving your car four feet. This isn’t whataboutism. The point isn’t merely that other things are worse. It’s that the concern about LLM-use is wildly disproportionate to the actual impact. If someone gives up prompting, whatever they do instead would likely be worse for the environment.
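The arithmetic behind these comparisons is easy to check. Here is a minimal back-of-envelope sketch in Python: the per-query CO2 and water figures, the 660-gallon hamburger, and the 14-tonne American footprint are the estimates cited above, while the rate of ten queries a day for “heavy” personal use is an illustrative assumption rather than a measured one.

# Rough sanity check of the per-query comparisons above.
CO2_PER_QUERY_G = 3.0        # upper end of the cited 1-3 g estimate
WATER_PER_QUERY_ML = 12.5    # midpoint of the cited 10-25 ml estimate
QUERIES_PER_DAY = 10         # assumed rate for "heavy" personal use

annual_co2_kg = CO2_PER_QUERY_G * QUERIES_PER_DAY * 365 / 1000   # ~11 kg
footprint_ratio = 14_000 / annual_co2_kg                         # ~1,300
hamburger_queries = (660 * 3785.4) / WATER_PER_QUERY_ML          # ~200,000

print(f"Heavy LLM use: ~{annual_co2_kg:.0f} kg CO2 per year")
print(f"Average US footprint is ~{footprint_ratio:.0f} times larger")
print(f"Water in one hamburger: ~{hamburger_queries:,.0f} queries")

Even if you double or triple the assumed usage, personal LLM use remains a rounding error in an individual carbon footprint.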
A natural response is that, as with environmental problems generally, the problem isn’t individual behavior but the industry as a whole. Yet the environmental impacts of LLM data centers are modest compared to other industries. Cement, steel, and mining are far worse. So are golf courses, manufacturing plants, and car factories. AI isn’t even costly compared to other information technology. YouTube, for example, consumes 100 times as much energy as all chatbots combined. Again, the point isn’t that other parties are guilty too. What the comparisons show is that the industry’s impact is just not very large.
Now let’s turn to a separate concern: long-term environmental costs.
Even if the environmental toll is modest now, it is projected to double or triple over the next five years. Electricity supply is not expected to expand at a similar rate. And unlike other industries, which are limited by population and physical infrastructure, there’s no natural ceiling on AI growth. In principle, AI can be powered by green energy. But data centers have to run constantly, and that means intermittent renewables alone are not enough. Companies will have to rely on fossil fuels in the near term.
This is a stronger argument, worth taking seriously in our view. Yet it’s far from conclusive.
Hyperscalers will have strong incentives to invest in renewables, and the demand generated by AI may accelerate investment in green energy, grid infrastructure, and storage. Relatedly, as Masley argues, the largest environmental effect of AI will likely be indirect. For example, AI can reduce global emissions by advancing materials science. The systems have already optimized grids to increase efficiency. According to a report from the International Energy Agency, the application of AI to other sectors is expected to eliminate three to four times the emissions that AI data centers produce themselves. The upshot is that we don’t know whether the net long-term effect on the environment will be positive or negative, and yet critics often exude great confidence.
The future is uncertain, but what’s far clearer is that AI is not currently an environmental catastrophe. The critics’ argument isn’t based on any principled calculation. Data centers have existed for decades, but environmental concerns spiked only once AI attracted ire for other reasons. Climate change has been recruited for the cause because it does political work as the centerpiece of the progressive movement. Fighting climate change should be a top priority, but opposing AI data centers is not an important campaign in that fight, and it draws resources away from where they’re most needed.
Intellectual Property
Whatever their intellectual value or environmental costs, a separate charge against AI models is that they’re built on theft:
LLMs were trained by looting the work of authors and artists without permission—companies scraped the entire internet and then built products worth trillions of dollars. Goliath companies are exploiting David creators, who get none of the profit.
Evaluating this argument is complicated. It depends on properly understanding the scope and basis of intellectual property rights. Collectively, we’re less certain about this argument than the previous two, but we agree that it gets more credit than it deserves.
The first issue is fair use. You can’t simply copy someone else’s intellectual property, but fair use does cover using it in a way that is transformative. And, indeed, LLMs don’t just reproduce stored copies of the training materials; their output is generated by the model’s weights. If training isn’t fair use, it’s hard to see what would be. Courts have even ruled that reaction videos (on YouTube and elsewhere) count as fair use. It’s true that LLMs can reproduce a creator’s style, but it’s never been possible to copyright style.
Beyond the legal question is a deeper issue about creativity. Training doesn’t seem to be theft any more than human-generated art or writing normally is. All artists learn by consuming and imitating previous work, contrary to the Romantic myth that individual genius resides in some original spark. In general, creators don’t get to decide who is allowed to be inspired by their work—even if those they inspire work at a corporation, are working at scale, and are motivated by profit. One difference is that LLM developers store copyrighted work, but this speaks in favor of licensing fees, not a prohibition on training.
To understand better why the theft argument is weak, it’s helpful to compare it to other arguments. It’s possible that AI-generated content will deplete our intellectual and creative commons, if it leads to the decline of human ideas and art. (Possible but not certain. It might also increase human creativity just as other tools have—the printing press, cameras, etc.) But this has nothing to do with theft. To see why, imagine AI systems that are just as capable as current models, yet based not on deep learning systems fed the work of writers and artists but on classic computational architectures—GOFAI Dall-E, if you will. By hypothesis, this would deplete the commons too, and maybe we should oppose it. But competition isn’t theft. Artists don’t have a right to their audience.
As Richard Chappell argues, we also have to consider the philosophical foundations of intellectual property. People don’t have a natural right against others copying their work, since the creations are “non-rival”—consuming them doesn’t diminish the original. Rather, we accord people property rights over their creations in order to incentivize innovation. Restricting access to block transformative technology is incompatible with this foundation. Again, if creators deserve something, it’s licensing fees, not a ban on training.
Chappell notes, too, that it’s strange for intellectuals and creatives to endorse the argument that training is theft, since they have historically taken an expansive view of intellectual property, opposing restrictive laws and celebrating remix culture. Think Napster, hip-hop, fan fiction, Sci-Hub, and memes. Those who want to restrict re-use—like Disney and Metallica—have usually been regarded as villains. In general, intellectual property restrictions wall off culture from the vast majority of people; the insistence on strict protections usually comes from those who seek to hoard it.
Better Arguments
So AI is great, right? No, that doesn’t follow. We each hold different views on AI, ranging from cautious optimism to extreme worry, with uncertainty the only common thread. But we all agree that AI raises very serious challenges that deserve our attention. Ultimately, it might be that the balance of considerations favors opposition to AI development and heavy government regulation. We’ll end by reviewing criticisms that are worth taking more seriously.
LLMs are cognitive threats. They may deskill large swathes of the population, undermining the education of children and other students. The technology could strangle original thought as humanity outsources ever more thinking. If LLMs isolate users, depriving them of diverse social networks and sycophantically affirming their thoughts, they may also trap us deeper in echo chambers. We don’t know whether AI may end up cognitively enhancing humanity instead—whether these worries are just the panic people always feel toward new information technologies absent from their youth—but the concerns about deskilling are serious and can’t be dismissed.
LLMs may also pollute our culture and epistemic environment. We may be inundated with low-quality slop. Eventually, if the slop spills into the training materials, this might lead to model collapse. Many want to avoid consuming AI-generated content but don’t know how. Even more troubling, perhaps, are the ways in which LLMs can burrow into our psyches. Some users may develop parasocial attachments with bots that prevent them from forming genuine relationships. The technology may be a net therapeutic good, or it may breed psychotic delusions and push vulnerable people toward self-harm.
More broadly, AI might lead to mass unemployment—so quickly that the economy fails to adapt, creating a permanent underclass. Those in control of the technology might wield enormous and unchecked power over the rest of us. Advanced AI in the coming years might lead to mass death or even human extinction, either from rogue AIs or from rogue human users.
Some may see these problems as so serious that it’s worth hurling every criticism at AI and hoping something lands. But it’s counterproductive to advance criticisms that won’t stand up to scrutiny. And we have to stay open to the possibility that the benefits of AI outweigh the costs.
It’s not realistic to jettison our biases, to eliminate all motivated reasoning. But it is possible for people with different biases to listen to each other with open minds, building a clearer picture of the technology (as our lab tried to do in writing this essay). If we can’t do that, we won’t be able to adapt our societies to reckon with AI.
For helpful discussion and feedback on previous drafts, we are grateful to other members of the Mind and Morality Lab: Zayda Romero, Velana Valdez, Casey Lewry, Zeya Wu, Juliet Feldman, Derek Anderson, and Rachel McKinney.