A friend recently informed me that a US passport costs $2,000. He said this with confidence because two social media influencers he follows also said it with confidence. They, however, were repeating AI-generated slop designed to foment unrest against the USA’s Safeguard American Voter Eligibility (SAVE) Act. The actual cost of a first-time US passport is $165. Renewal is $130. And a passport card is $65 for a first card and $30 for renewal.
The passport story was part of a wider AI blitz claiming that the SAVE Act will prevent tens of millions of US electors from voting, on the false premise that, in order to register and to vote in an election, an elector must produce both a valid passport and a birth certificate. The cost claim was designed to suggest that swathes of poorer Americans would be excluded, while the birth certificate claim implies that married women who changed their name would be denied a vote. And, of course, Democrats have long maintained that African Americans lack the ability to obtain a valid ID. Taken together, these claims supposedly “prove” that the SAVE Act is a step on the road to a fascist dictatorship.
One of the unpleasant side effects of calling everyone you disagree with a fascist is that pretty soon you find yourself surrounded by fascists. In this case, we would have to include a raft of countries which require voter ID in the same category, including: Argentina, Brazil, France (in towns over 1,000 residents), Germany, Greece, Hungary, Iceland, India, Israel, Italy, Mexico, Namibia, Netherlands, Norway, and the United Kingdom. Canada, Sweden and Switzerland also require ID, but are more flexible, accepting multiple forms of non-photo ID in lieu of a photo ID.
Nor is the USA as inflexible as the AI slop merchants insist. A passport, Citizenship-Marked REAL ID card (not all states have these yet) or a Place-of-Birth Photo ID counts as stand-alone proof of eligibility to vote. An alternative photo ID such as a driving licence must be backed up with a secondary document such as, but not restricted to, a birth certificate or a Hospital Record of Birth.
It is also worth noting that there is massive public support (84%) for voter ID in the USA – 95% Republicans and 71% Democrats.
This, then, is just one recent example of the phenomenon captured in a quip ironically misattributed to Mark Twain:
“A lie can travel halfway around the world while the truth is still putting on its shoes.”
Obviously, internet lies predate the development of large language models. Back in the day, though, they were easier to spot, not least because most of us got most of our news from establishment media which could still afford to pay the wages of investigative journalists. But in the UK and the USA today, half of us now get our news from social media, with younger people less likely to use increasingly cash-starved and unreliable legacy media. And in the online space, there is more than just 24-hour news pressure driving content. The internet is rapidly being overwhelmed by AI-generated slop designed to solicit an emotional response for the sole purpose of gathering clicks. And despite the obvious clues, it seems that a large part of the population is unable to spot anything wrong.
For the most part, this seems to validate Cory Doctorow’s theory of “enshittification,” in which both the clickbait farms and the social media platforms benefit from capturing our attention… and, of course, bombarding us with advertising. But more recently, we have seen a growing trend of using AI-fakes to apparently validate one or other political position. Here in the UK, for example, while the recent release of more Epstein files has rocked the sitting government because of the close and longstanding connections between Epstein and former ambassador to the USA, Peter Mandelson, left-leaning activists were disappointed that there was no sign of scandal attaching to Reform UK’s Nigel Farage… and so, they made one up.
As with other prominent UK party leaders, Farage’s name only appears indirectly – 21 out of 35 times being in a single 2018 email conversation between Epstein and Steve Bannon relating to Brexit and similar political movements in Europe. And cooler heads have pointed out that there are plenty of valid reasons to criticise Farage without resorting to AI to fabricate more.
Meanwhile, with all of the unrest going on in Minnesota, several people popped up on my Facebook feed sharing an image of ICE officers illegally arresting civil rights attorney Sandra May Watkins in her own driveway… but there was something off about that image. It was just a little too defined and smooth to have been taken on a hand-held mobile phone. Sure enough, it turns out that there is no civil rights attorney called Sandra May Watkins, and that the whole thing was made up (as if anyone needs further incidents to discredit ICE in Minnesota).
As with my friend’s fake passport price, the folk sharing this AI slop likely did so in good faith, since it fitted in with their preexisting beliefs. And that’s the deeper problem, because this confirmation bias – on all sides of politics – is helping to drown social media feeds in AI slop to the point that nobody can be sure what is true anymore. Certainly, there are various online tools that can check whether an image, video or text is AI-generated. But, let’s be honest, only an infinitesimal minority of us are going to take ten minutes to do so when it takes just a click to like or share it.
And adding to the problem, AI training is now treating the slop as real, as well as throwing up so-called “hallucinations” – instances where the AI simply makes things up. Even after attempts at clean-up, top-tier models like GPT-5 and Claude 4.1 still hallucinate up to 10% of the time, while some models spew out hallucinations up to 88% of the time. Worse still, subsequent AI training treats these hallucinations as real, compounding hallucination upon hallucination. As Rot Economy author Ed Zitron explains:
“In essence, the AI boom requires more high-quality data than currently exists to progress past the point we’re currently at, which is one where the outputs of generative AI are deeply unreliable. The amount of data it needs is several multitudes more than currently exists at a time when algorithms are happily-promoting and encouraging AI-generated slop, and thousands of human journalists have lost their jobs, with others being forced to create generic search-engine-optimized slop. One (very) funny idea posed by the Journal’s piece is that AI companies are creating their own ‘synthetic’ data to train their models, a ‘computer-science version of inbreeding’ that Jathan Sadowski calls Habsburg AI.
“This is, of course, a terrible idea. A research paper from last year found that feeding model-generated data to models creates ‘model collapse’ — a ‘degenerative learning process where models start forgetting improbable events over time as the model becomes poisoned with its own projection of reality.’”
Our predicament is not so much that individual stories are fake but rather that the overwhelming bombardment of AI slop forces us to default to treating everything as fake unless we can prove otherwise… a default position which will tear apart what remains of the social fabric. As Dr. Jun-E Tan at Khazanah Research Institute explains:
“With the proliferation of cheaply generated content, global elections in recent years have had to contend with AI-generated content. For example, in 2024, a year unusually concentrated with general elections globally, researchers tracked 50 elections and found that 80% of these elections had incidents connected to gen-AI, with 215 incidents logged. Out of these incidents, more than two thirds (69%) had “a harmful role” in the election.
“Politics and elections are therefore a fertile ground for AI-enabled disinformation campaigns. However, the more insidious effects of AI slop does not require the intent of deception and manipulation. With a communication and media environment overwhelmed with low quality, mass-generated content, the larger concern here is not that people will believe in false information, but that they will find it difficult to believe what they see and read at all.
“The uncertainty about the veracity of information can result in people defaulting into existing biases and identity politics, tearing apart the social fabric that enables inter-group engagement that builds consensus and a shared vision of the future. The general lack of public trust also undermines the legitimacy of journalistic content and democratic processes, further exacerbating the problem in a vicious cycle.”
Yet another reason why the bursting of the AI bubble can’t come soon enough.
As you made it to the end…
you might consider supporting The Consciousness of Sheep. There are seven ways in which you could help me continue my work. First – and easiest by far – please share and like this article on social media. Second, follow my page on Facebook. Third, follow my channel on YouTube. Fourth, sign up for my monthly e-mail digest to ensure you do not miss my posts, and to stay up to date with news about Energy, Environment and Economy more broadly. Fifth, if you enjoy reading my work and feel able, please leave a tip. Sixth, buy one or more of my publications. Seventh, support me on Patreon.