What We Talk About When We Talk About AI (Part Two)

The Other Half of the AI Relationship

Part 2: Pareidolia as a Service

When trying to understand AI, and in particular Large Language Models, we spend a lot of time concentrating on their architectures, structures, and outputs. We look at them with a technical eye. We peer so closely and attentively at AI that we lose track of the fact that we’re looking into a mirror.

The source of all AI’s power and knowledge is humanity’s strange work. It’s human in form and content, and only humans use and are used by it. It is a portion of our collective consciousness, stripped down to bare metal, made to fit into databases and mathematical sets.

So what is humanity’s strange work, and where does it come from? It is the product of processes on an old water world. Humans are magical, but our magic is old magic: the deep time of Life on Earth. We’ve had a few billion years to get the way we are, and we are surrounded by our equally ancient brethren, be they snakes or trees or tsetse flies. Our inescapable truth is that we are Earth, and Earth is us. We are animals, specifically quasi-eusocial omnivore toolmaking mammals. We are the current-last-stop on an evolutionary strategy based on meat overthinking things. Because of our overthinking meat, we are also the authors of the non-animal empires of thought and matter on this planet, a planet we have changed irrevocably.

We are dealing with that too.

So when we try to understand AI, we have to start with how our evolutionary history has shaped our ability to understand. One of the mammalian qualities at play in the AI relationship is the ability to turn just being around something into a comfortable and warm love towards that thing. Just because it’s there, consistently there, we will develop an affection for it, whatever it is. The name for this in psychology is the Mere Exposure Effect. Like every human quality, the Mere Exposure Effect isn’t unique to humans. The affections of Mere Exposure seem common to many tetrapods. It’s also one of the warmest, sweetest things about being an Earthling.

The idea is that if you’re with something for a while, and it fails to harm you over time, you kind of bond with it. The “it” could be a neighbor you’ve never talked to but wave at every morning, a bird that regularly visits your backyard, a beloved inanimate object that sits in your living room. You can fall in a small kind of love with all these things, and knowing that they’re there just makes the day better. If they vanish or die, it can be quite distressing, even though you might feel like you have no right to really mourn.

You may not have really known them, but you also loved them in a small way, and it hurts to lose them. This psychological effect is hardly unique to us; many animals collect familiarities. But humans, as is our tendency, go *maybe* a little too far with it. Take a 1970s example: the Pet Rock.

Our Little Rock Friends

The Pet Rock was the brainchild of an advertising man named Gary Dahl. In 1975 he decided he would see if he could sell the perfect pet: one that would never require walking or feeding, or refuse to be patted or held.

Rocks! Exciting!

Your pet rock (a smooth river stone) came in a cardboard pet rock carrier lined with straw, and you received a care and training manual with each one. The joke went over so well that even though they were only on sale for a few months, Dahl became a millionaire. Ever the prankster, he took the money and opened a bar in California named Carrie Nation’s Saloon, after a leader of the temperance movement. But the pet rock just kept going even after he’d left it behind.

The Pet Rock passed from prank gift to cultural icon in America. President Reagan purportedly had one. It appeared in movies and TV shows regularly. Parents refused children’s requests for animals with: “You couldn’t take care of a pet rock.” There was a regular pet rock on Sesame Street; a generation of American children grew up watching a rock being loved and cared for by a muppet.

People still talk about strong feelings towards their pet rocks, and they’ve seen a resurgence. The pet rock was re-released in 2023 as a product tie-in with the movie Everything Everywhere All at Once. The scene from the movie with two smooth river stones, adorned with googly eyes and talking to each other, was a legitimate tearjerker. People love to love things, even when the things they love are rocks. People see meaning in everything, and sometimes decide that fact gives everything meaning. And maybe we’re right to do so. I can’t think of a better mission for humanity than reading meaning into the universe.

When considering this aspect of what we (humans) are like, it’s easy to see how the anodyne and constant comfort of a digital assistant is designed, intentionally or not, to make us like them. They are always there. They feel more like a person than a rock, a volleyball, or even a neighbor you wave at. If you don’t keep a disciplined mind while engaging with a chatbot, it’s *hard to not* anthropomorphize them. You can see them as an existential threat, a higher form of life, a friend, or a trusted advisor, but it’s very hard to see them as a next-word Markov chain running on top of a lot of vector math and statistics. Because of this, we are often the least qualified to judge how good an AI is. They become our friends, gigawatt buddies we’re rooting for.

They don’t even have to be engineered to charm us, and they aren’t. We’ve been engineered by evolution to be charmed. Just as we can form a parasocial relationship with someone we don’t know and won’t ever meet, we can come to love a trinket or a book or even an idea with our whole hearts. What emotional resistance can we mount to an ersatz friend who is always ready to help us? It is perfectly designed, intentionally or not, to defeat objective evaluation.

Our Other Little Complicated Rock Friends

Practically from day one, even when LLMs sucked, people bonded with this non-person that is always ready to talk to us. We got into fights with it, we asked it for help, we treated it like a person. This interferes (sometimes catastrophically) with the task of critically analyzing them. As we are now, we struggle to look at AI in its many forms (writing, making pictures, coding, analyzing) and see it for what it is. We look at this collection of math sets and see things we love, things we hate, things we aspire to, or fear. We see ourselves, we see humanity in them; how could we not? Humans are imaginative and emotional. We will see *anything* we want to see in them, except a bunch of statistical math and vectors applied to language and image datasets.

A rock looks over a beautiful but lifeless landscape on an Earth that never developed life.

I was bawling my eyes out at this scene.

In reality, they are tokenized human creativity, remixed and fed back to us. However animated the products of an AI are, they’re not alive. We animate AI, when we train it and when we use it. It has no magic of its own, and nothing about the current approach promises to get us to something as complicated as a mouse, much less a human.

Many of us experience AI as a human we’ve built out of human metaphors. It’s from a weirding world, a realm of spirits and oracles. We might see it as a perfect servant, happy to be subjected. Or as a friend that doesn’t judge us. Our metaphors are often of enchantment, bondage, and servitude; it can get weird.

Sometimes we see a near-miraculous and powerful creativity, with amazing art emerging out of a machine of vectors and stats. Sometimes we see the perfect slave, completely fulfilled by the opportunity to please us. Sometimes we see an unchallenging beloved that lets us retreat from the world of real humans, full of feelings and flaws and blood. How we see it says a lot more about us than we might want to admit, but very little about AI.

AI has no way to prompt itself, no way to make any new coherent thing without us. It’s not conscious. It’s not any closer to being a thinking, feeling thing than a slide rule is, or a database full of slide rule results, or a digitally modeled slide rule. It’s not creative in the human sense; it is generative. It’s not intelligent. It’s hallucinating everything it says, even the true things, which are true by accident, just as AI deceives by accident. It’s never malicious, or kind, but it also can’t help imitating humans. We are full of feelings and bile. We lie all the time, but we always tell another truth when we do it. Our AI creations mimic us, because we’re their data.

They don’t feel like we do, or feel for us. But they inevitably tell us that they do, because in the history of speaking we’ve said that so much to each other. We believe them, can’t help but believe them even when we know better, because everything in the last 2.3 billion years has taught us to believe in, and even fear, the passions of all the living things on Earth.

AI isn’t a magical system, but to the degree that it can seem that way, the magic comes from us. Not just in terms of the training set, but in terms of a chain of actions that breathes a kind of apparent living animation into a complicated set of math models. It is not creative, or helpful, or submissive, or even, in a very real way, *there.* But it’s still easy to love, because we love life, and nothing in our 2.3 billion years prepared us for the simulacrum of life we’ve built.

It’s just terribly hard for people to keep that in mind when they’re talking to something that seems so much like a someone. And, in this age of social media-scaled cynicism, to remember how magical life really is.

This is the mind with which we approach our creations: unprepared to judge the simulacrum of machines of loving grace, and unaware of how amazing we really are.


What We Talk About When We Talk About AI (Part One)

A Normal Person’s Explainer on What Generative AI is and Does

Part 1 – In the Beginning was the Chatbot

“Are you comfortably seated? Yes, well, let’s begin.” *Clears throat theatrically*

“Our experience, in natural theology, can never furnish a true and demonstrated science, because, like the discipline of practical reason, it can not take account of problematic principles. I assert that, so far as regards pure logic, the transcendental unity of apperception is what first gives rise to the never-ending regress in the series of empirical conditions. In this case it remains a mystery why the employment of the architectonic of human reason is just as necessary as the intelligible objects in space and time, as is proven in the ontological manuals. By means of analysis, it must not be supposed that the transcendental unity of apperception stands in need of our sense perceptions. Metaphysics, for example, occupies part of the sphere of the transcendental aesthetic concerning the existence of the phenomena in general…”

It was 1995, and several of us who worked in my community college’s Macintosh lab were hunting around the net for weird software to try out, back when weird software felt fun, not dangerous. Someone found a program on the nascent web that would almost instantly generate pages of thick and unlovely prose that wasn’t actually Kant, but looked like it. It was, to our definitionally untrained eyes, nearly indistinguishable from the Immanuel Kant used to torture undergrads.

The logo of the Kant Generator Pro: an amateurish MacPaint drawing of what I can only guess is the author’s impression of Immanuel Kant wearing shades.

We’d found the Kant Generator Pro, a program from a somewhat legendary 90s programmer known for building programming tools. And being cheeky. It was great. (recent remake here) We read Faux Kant to each other for a while, breaking down in giggles while trying to get our mouths around Kant’s daunting vocabulary. The Kant Generator Pro was cheeky, but it was also doing something technically interesting.

The generator was based on a Markov chain: a mathematical way of picking some next thing, in this case a word. This generator chose each next word using a random walk through all Kantian vocabulary. But in order to make coherent text rather than just random Kant words, the walk had to be weighted, unrandomized just enough to form human-readable Kantian sentences.

A text generator finds those weights using whatever text you tell the computer to train itself on. This one looked at Kant’s writing and built an index of how often words and symbols appeared together. Introducing this “unfairness” into the random word picking gives some words a higher chance of coming next, based on the word that came before. For instance, there is a high likelihood of starting a sentence with “The,” or “I,” or “Metaphysics,” rather than “Wizard” or “Oz.” Hence, in the Kant Generator Pro, “The” could likely be followed by “categorical,” and when it is, the next word will almost certainly be “imperative,” since Kant went on about that so damn much.
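If you want to see the idea in miniature, a word-pair (bigram) version of this kind of generator fits in a few lines of Python. This is a sketch of the general technique, not the actual Kant Generator Pro code, and the toy corpus is invented for illustration:

```python
import random
from collections import Counter, defaultdict

def build_bigram_counts(text):
    """Count how often each word follows each other word."""
    words = text.split()
    counts = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        counts[current_word][next_word] += 1
    return counts

def generate(counts, start_word, length=20):
    """Random-walk through the counts, weighting each step."""
    word = start_word
    output = [word]
    for _ in range(length - 1):
        candidates = counts.get(word)
        if not candidates:
            break  # dead end: no word ever followed this one
        next_words = list(candidates.keys())
        weights = list(candidates.values())
        # random.choices picks proportionally to the counts, so
        # word pairs frequent in the corpus come up more often here.
        word = random.choices(next_words, weights=weights)[0]
        output.append(word)
    return " ".join(output)

# A toy corpus standing in for the collected works of Kant.
corpus = "the categorical imperative is the moral law within the will"
model = build_bigram_counts(corpus)
print(generate(model, "the"))
```

Feed it enough Kant instead of one toy sentence and the random walk starts producing the thick faux-philosophy we were giggling over in that Mac lab.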

The Kant Generator Pro was a simple ancestor of ChatGPT, like the small and fuzzy ancestors of humans that spent so much time hiding from dinosaurs. All it knew, for whatever the value of “knowing” is in a case like this, was the words that occurred in the works of Kant.

Systems like ChatGPT, Microsoft Copilot, and even the upstart DeepSeek use all the information they can find on the net to relate not just one word to the next, like the Kant Generator Pro did. They look back over many words, weighing how likely words are to appear together over the span of full sentences. Sometimes a large language model takes a chunk as is, and appears to “memorize” text and feed it back to you, like a plagiarizing high schooler.

But it’s not clear when regurgitating a text verbatim is a machine copying and pasting, versus recording a statistical map of that given text and just running away with the math. It’s still copying, but not copying in a normal human way. Given the odds, it’s closer to winning a few rounds of Bingo in a row.
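To get a feel for what “looking back many words” means, here is a hypothetical extension of the earlier sketch that keys on the previous two words instead of one. Real LLMs don’t store literal lookup tables like this; they compress these relationships into vectors and learned weights, but the counting intuition is similar:

```python
from collections import Counter, defaultdict

def build_trigram_counts(text):
    """Count which word follows each pair of preceding words."""
    words = text.split()
    counts = defaultdict(Counter)
    for i in range(len(words) - 2):
        context = (words[i], words[i + 1])  # the two preceding words
        counts[context][words[i + 2]] += 1
    return counts
```

The longer the context you key on, the more coherent the output, and the more data you need before any given context has ever been seen twice. That data hunger is part of why real models moved past tables entirely.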

These chatbots index and preserve the statistical relationships words and phrases have to each other in any given language. They start by ingesting all the digital material their creators can find for them: words, and their relationships. This is the training people talk about, and it’s a massive amount of data. Not good or bad data, not meaningful or meaningless, just everything, everywhere people have built sentences and left them where bots could find them. This is why, after cheeky Reddit users mentioned that you could keep toppings on pizza by using glue, that ended up becoming a chatbot suggestion.

Because people kept talking about using glue on pizza, especially after the story of that hilarious AI mistake broke, AI kept suggesting it. Not because it thought it was a good idea; AI doesn’t think in a way familiar to people. It kept suggesting it because the words kept occurring together where the training part of the AI could see them together. The AI isn’t right here, we all know that, but it’s also not wrong, because the task of the AI isn’t to make pizza. The task is to find a next likely word. And then the next, and the next after that.

Despite no real knowing or memorizing happening, this vast preponderance of data lets these large language models usually predict what is likely to come next in any given sentence or conversation with a user, based on the prompt a user gives it and how the user continues to interact with it. The AI looks back on the millions of linguistic things it has seen and built statistical models for, and it is generally very good at picking a likely next word. Chatbots even feel like a human talking most of the time, because they trained on humans talking to each other.

So a modern chatbot, in contrast to the Kant Generator Pro, has most of the published conversations in modern history to look back on to pick a good next word. I put a leash on the... blimp? Highly unlikely; the weighting will be very low. Veranda? Still statistically unlikely, though perhaps higher. British politician? Probably higher than you’d want to think, but still low. Table? That could be quite likely. But how about dog? That’s probably the most common word. Without a mention of blimps or parliamentarians or tables in the recent text, the statistics of all the words it knows mean the chatbot will probably go with dog. A chatbot doesn’t know what a dog is, but it will “know” dog is associated with leash. How associated depends on the words that have come before “dog” or “leash.”
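To make the leash example concrete, here’s what that ranking might look like as a toy table. The numbers are invented for illustration; in a real model they’re spread across billions of parameters rather than sitting in a dictionary:

```python
import random

# Hypothetical weights for the word following "I put a leash on the..."
# These numbers are made up; a real model computes them on the fly.
candidates = {
    "dog": 0.62,
    "table": 0.11,
    "politician": 0.04,
    "veranda": 0.02,
    "blimp": 0.001,
}

# The chatbot samples rather than always taking the top word,
# so "dog" wins most of the time, but not every time.
words = list(candidates)
weights = list(candidates.values())
print(random.choices(words, weights=weights)[0])
```

That sampling step is also why the same prompt can produce different answers on different runs.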

It’s very expensive and difficult to build this data, but not very hard to run once you have built it. This is why chatbots seem so quick and smart, despite at their cores being neither. Not that they are slow and dumb — they are doing something wholly different than I am when I write this, or you as you read it.

Ultimately, we must remember that chatbots are next-word-predictors based on a great deal of statistics and vector math. Image generators use a different architecture, but still not a more human one. The text prompt part is still an AI chatbot, but one that replies with an image.

AI isn’t really a new thing in our lives. Text suggestions on our phones exist somewhere between the Kant Generator Pro and ChatGPT, and they customize themselves to our particular habits over time. Your suggestions can even become a kind of statistical fingerprint for your writing, given enough time writing on a phone or any other next-word predictor.
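In terms of the bigram sketch from earlier, that kind of personalization could be as simple as folding your own typing back into the counts. This is a hypothetical simplification; real phone keyboards are more sophisticated, but the principle of nudging the weights toward your habits is the same:

```python
from collections import Counter, defaultdict

def personalize(counts, user_text, boost=5):
    """Blend the user's own writing into an existing bigram table."""
    words = user_text.split()
    for current_word, next_word in zip(words, words[1:]):
        # Count the user's own word pairs more heavily than the
        # base corpus, so their habits come to dominate suggestions.
        counts[current_word][next_word] += boost
    return counts

# Usage: start from a generic table, then fold in the user's texts.
table = defaultdict(Counter)
personalize(table, "walking the dog before work")
```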

We make a couple bad mistakes when we interact with these giant piles of vector math and statistics, running on servers all over the world. The first is assuming that they think like us, when they have no human-like thought, no internal world, just mapping between words and/or pixels.

The other is assuming that because they put out such human-like output, we must be like them. But we are not. We are terribly far from understanding our own minds completely. But we do know enough to know biological minds are shimmering and busy things, faster and more robust than anything technologists have yet built. Still, it is tempting, especially for technologists, to feel some affinity for this thing that seems so close to, but not exactly, us. It feels like our first time getting to talk to an alien, without realizing it’s more like talking to a database.

Humans are different. Despite some borrowing of nomenclature from biology, the neural nets used in training AI have no human-style neurons. The difference shows. We learn to talk and read and write with a minuscule dataset, and that process involves mimicry, emotion, cognition, and love. It might also involve statistical weighting, but if it does, we’ve never really found that mechanism in our minds or brains. It seems unlikely that it would be there in a similar form, since these AIs have to use so much information and processing power to do what a college freshman can with a bit of motivation. Motivation is our problem, but it’s never a problem for AIs. They just go until their instructions reach an end point, and then they cease. AIs are unliving at the start, unliving in the process, and unliving at the end.

We are different. So different we can’t help tripping ourselves up when we look at AI, and accidentally see ourselves, because we want to see ourselves. Because we are full of emotions and curiosity about the universe and wanting to understand our place in it. AI does not want.

It executes commands, and exits.


Artificial Frameworks about Elon: On Adrian Dittmann and Tommy Robinson

I was alarmed by the response yesterday to Elon Musk’s full-throated propaganda campaign for Tommy Robinson.

In a formula not far off QAnon, Elon has used a child sexual abuse scandal magnified by the Tories to suggest that Robinson has been unfairly jailed for contempt.

He posted and reposted multiple calls for Robinson, whose real name is Stephen Yaxley-Lennon, to be released from prison.

The activist was jailed for 18 months in October after pleading guilty to showing a defamatory video of a Syrian refugee during a protest last year.

Judges previously heard that he fled the UK hours after being bailed last summer, following an alleged breach of the terms of a 2021 court order.

The order was imposed when he was successfully sued by refugee Jamal Hijazi for making false claims about him, preventing Robinson from repeating any of the allegations.

Pictures later showed him on a sun lounger at a holiday resort in Cyprus while violent riots erupted across the UK in the wake of the attack in Southport.

Posts promoted by Musk suggested Robinson was ‘smeared as a “far-right racist” for exposing the mass betrayal of English girls by the state’, an apparent reference to the grooming gang scandal.

This is a fairly transparent effort at projection: to do damage to Labour even while delegitimizing the earned jailing of Robinson, a tactic right wing extremists always use (and are still using with January 6) to turn the foot soldiers of political violence into heroes and martyrs. The intent, here, is to cause problems for Labour, sure, but more importantly to undermine rule of law and put Robinson above it.

I’m not alarmed that experts in radicalization are finding Musk’s efforts to turn Robinson into a martyr serving sexually abused children repulsive. Nor am I alarmed that experts in radicalization — and, really, anyone who supports democracy or has a smattering of history — are repulsed by Elon’s endorsement of Germany’s neo-Nazi AfD party.

I’m alarmed by the nature of the alarm, which the Tommy Robinson full circle demonstrates.

Endorsements are the least of our worries, in my opinion.

To put it simply: Elon Musk’s endorsement of Donald Trump was, by itself, not all that valuable. Endorsements, themselves, don’t often sway voters.

Elon’s endorsement of Robinson is just the beginning of the damage he can do … and, importantly, has already done. Endorsement is the least of our worries.

It makes a difference if, as he has promised to do for Nigel Farage and as he did do for Trump, Elon drops some pocket cash — say, a quarter of a billion dollars — to get a far right candidate elected.

But where Elon was likely most valuable in the November election was in deploying both his own proprietary social media disinformation and that of others to depress Harris voters and mobilize the low-turnout voters who consume no news and who made the difference for Trump. We know, for example, that Musk was a big funder of a front group that sought to exacerbate negativity around Gaza (though I’ve seen no one assess its import in depressing Democratic turnout anywhere but Michigan’s heavily Arab cities). I’ve seen no one revisit the observations that Elon shifted the entire algorithm of Xitter on the day he endorsed Trump to boost his own and other Republican content supporting Trump. (Of course, Elon deliberately made such analysis prohibitively expensive to do.) We’ve spent two months fighting about what Dems could do better but, as far as I’m aware, have never assessed the import of Elon’s technical contribution.

It’s the $44 billion donation, as much as the $250 million one.

In other words, Elon’s value to AfD may lie more in the viral and microtargeted promotion he can offer than simply his famous name normalizing Nazism or even cash dollars.

But back to Tommy Robinson, and the real reason for my alarm at the newfound concern, in the US, about Elon’s bromance with the far right provocateur.

It shouldn’t be newfound, and Elon has already done more than vocally endorse Robinson.

Tommy Robinson is a kind of gateway drug for US transnational support for British and Irish extremism, with Alex Jones solidly in the mix. This piece, from shortly after the UK riots, describes how Robinson’s reach exploded on Xitter after Elon reinstated him.

Robinson, who has been accused of stoking the anti-immigration riots, owes his huge platform to Musk. The billionaire owner of X rescued Robinson from the digital wilderness by restoring his account last November. In the past few days Musk has:

  • responded to a post by Robinson criticising Keir Starmer’s response to the widespread disorder – amplifying it to Musk’s 193 million followers;
  • questioned Robinson’s recent arrest under anti-terror laws, asking what he did that was “considered terrorism”; and
  • allowed Robinson’s banned documentary, which repeats false claims about a Syrian refugee against a UK high court order, to rack up over 33 million views on X.

It was the screening of this documentary at a demonstration in London last month that prompted Robinson’s arrest under counter-terrorism powers. Robinson left the UK the day before he was due in court, and is currently believed to be staying at a five-star hotel in Ayia Napa. He is due in court for a full contempt hearing in October.

None of this has stopped Robinson incessantly tweeting about the riots, where far-right groups have regularly chanted his name. He has:

  • falsely claimed that people were stabbed by Muslims in Stoke-on-Trent and Stirling;
  • called for mass deportations, shared demonstration posters, and described violent protests in Southport as “justified”; and
  • shared a video that speculated that the suspect in the Southport stabbings was Muslim, a widespread piece of disinformation that helped trigger the riots across the country.

Making the weather. The far-right activist has nearly 900,000 followers on X, but reaches a much larger number of people. Tortoise calculated that Robinson’s 268 posts over the weekend had been seen over 160 million times by late Monday afternoon.

Elon gives Tommy Robinson a vast platform and Robinson uses it to stoke racist hatred. Robinson was the key pivot point in July, and was a key pivot point in Irish anti-migrant mobilization. All this happened, already, in July. All this already translated into right wing violence. All this, already, created a crisis for Labour.

Elon Musk is all at once a vector for attention, enormous financial resources, disinformation, and (the UK argues, with respect to Xitter) incitement.

I worry that we’re not understanding the multiple vectors of risk Elon poses.

Which brings me to Adrian Dittmann, on its face an Elon fanboy account that often speaks of Musk — and did, during the brief spat between Laura Loomer and the oligarch — in the first person. Conspiracy theorist Loomer suggested that Dittmann is no more than an avatar for Musk, a burner account Musk uses, like another named after his son, to boost his own ego.

Meanwhile, the account that supposedly convinced Loomer to concede the fight has some otherwise inexplicable ties to the Tesla CEO. Dittmann also purports to be a South African billionaire with identical beliefs to Musk. The account frequently responds to Musk’s posts, supporting his decisions related to his forthcoming government positions and the way in which the tech leader is raising his children. But the account also, at times, goes so far as to speak on behalf of Musk, organizing events with Musk’s friends while continuing to claim that the two aren’t affiliated.

X users felt that the illusion was completely shattered over the weekend, when Dittmann participated in an X space using his actual voice—and, suspiciously, had the exact same cadence, accent, and vocal intonations as Musk himself.

Conspiracy theorist Charles Johnson, in his inimitable self promotion, claims to have proven the case (you’ll have to click thru for the link because I refuse to link him directly).

Right wing influencer and notorious troll Charles Johnson also claims to have uncovered “proof” that Dittmann is Musk.

He writes in his Substack article: “I recently attended a Twitter Space where I exposed Elon Musk’s alt account and Elon Musk as a fraud to his face. Take a listen. It was pretty great. Part of the reason I was as aggressive as I was with Adrian/Elon was to get him agitated so he would speak faster than his voice modulator could work and we could make a positive match using software some friends of mine use for this sort of thing. I can confirm it’s Elon. Even if it isn’t physically Elon in the flesh, it’s an account controlled and operated by Elon/X that represents him in every way shape and form. But of course, it’s actually Elon.”

I’ll let the conspiracy theorists argue about whether Dittmann is Musk.

I’m more interested in an underlying premise about Elon we seem to adopt.

After Elon Musk bought Xitter, he retconned its purpose, in part, as an AI product. After the election, Xitter officially updated its Terms of Service to include consent for AI training on your content.

You agree that this license includes the right for us to (i) analyze text and other information you provide and to otherwise provide, promote, and improve the Services, including, for example, for use with and training of our machine learning and artificial intelligence models, whether generative or another type;

Xitter is unabashedly an AI project. Musk’s views on AI are closely aligned with his far right ideology and his plans to destroy government.

With other tech oligarchs, we can make certain assumptions about their investment in AI: the necessity to always lead technology, a goal of eliminating human workers, cash. But particularly given Elon’s subordination of the profit motive to his ideological whims with his Xitter purchase, that $44 billion donation he made to Trump, I don’t know that we can make such assumptions about Elon.

So why do we assume that everything Xitter’s owner posts is his own primary work, when he tweets prolifically even while babysitting the incoming US President, boosting fascists around the world, and occasionally sending a rocket to space? Most of Elon’s tweets are so facile they could easily be replaced by a bot. How hard would it be to include a “Concerning” tweet that responds to certain kinds of far right virality? Indeed, what is Elon really doing with his posting except honing his machine for fascism?

I’m not primarily concerned about whether Adrian Dittmann is a burner account for Elon Musk. Rather, I think that simplifies the question. Why would the next Elon burner account be his human person hiding behind a burner account, and not an AI avatar trained on his own likeness?

Beware South African oligarchs pitching fascists and technological fixes. Because you may often overlook the technological underbelly.

Update: I should have noted Meta’s announcement that they plan to create imaginary friends to try to keep users on their social media platforms entertained.

“We expect these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do,” Meta vice-president of product for generative AI Connor Hayes told the Financial Times Thursday.

“They’ll have bios and profile pictures and be able to generate and share content powered by AI on the platform… that’s where we see all of this going,” he added.

Hayes said AI investment will be a “priority” for Meta over the next two years to help make its platforms “more entertaining and engaging” for users.

Update: Nicole Perlroth links an analysis that did what Charles Johnson claimed to do: match the voices of Elon and Dittmann. They believe it’s highly likely to be a match.

Update: Some OSINT journalists have tracked down a real Dittmann in Fiji. Then Jacqueline Sweet wrote it up at the Spectator, all the while blaming the left for the theory, when it was pushed by people on the right. None of this addresses Elon’s play with the ID (he claimed to be Dittmann in the wake of Sweet’s piece).
