Long-term Language Misery

(This column is posted at www.StevenSavage.com, Steve’s Tumblr, and Pillowfort.  Find out more at my newsletter, and all my social media at my linktr.ee)

AI is irritatingly everywhere in news and discussions as I write this, like sand after a beach trip. Working in IT, I could hold forth on such issues as reliability, power consumption or how they’re really “Large Language Models” (Clippy on steroids). But I’d like to explore something that does not involve complaining about AI – hold your surprise.

Instead, I’d like to complain about people. What can I say, sometimes you stick with traditions.

As is often noted in critiques of AI, they really are a sort of advanced autocomplete, which is why I prefer the term Large Language Model (LLM). They don’t think or feel or have morals, or anything else we attribute to humans and intelligence. They just ape the behavior, delivering information and misinformation in a way that sounds human.

(Yeah, yeah, it’s a talk about AI but I’m going to call them LLMs. Live with it.)

However, when I look at LLM bullshit, misinformation, and mistakes, something seems familiar. The pretend understanding, the blatant falsehoods, the confident-sounding statements of utter bullshit. LLMs remind me of every conspiracy theorist, conspiritualist, political grifter, and buy-my-supplement extremist. You could replace Alex Jones, TikTok PastelAnon scammers, and so on with LLMs – hell, we should probably worry how many people have already done this.

LLMs are a reminder that so many of our fellow human beings spew lies no differently than a bunch of code churning out words assembled into what we interpret as real. People falling for conspiracy craziness and health scams are falling for strings of words that happen to be put in the right order. Hell, some people fall for their own lies, convinced by the “LLMs” they created in their own heads.

LLMs require us to confront many depressing things, but how long we’ve been listening to the biological equivalent of them has got to be up there.

I suppose I can hope that critique of LLMs will help us see how some people manipulate us. Certainly some critiques already call out conspiracy theories, political machinations, and the like. These critiques usually show how vulnerable we can be – indeed, all of us can be – to such things.

I mean, we have plenty of other concerns about LLMs and their proper and improper place. But cleaning up our own act certainly can’t hurt.

Steven Savage

AI: Same As We Never Admitted It Was


(I’d like to discuss Large Language Models and their relatives – the content generation systems often called AI.  I will refer to them as “AI” in quotes because they may be artificial, but they aren’t intelligent.)

Fears of “AI” damaging human society are rampant as of this writing in May of 2023.  Sure, AI-generated pizza commercials seem creepily humorous, but code-generated news sites are raking in ad sales and there are semi-laughable but disturbing political ads.  “AI” seems to be a fad, a threat, and a joke at the same time.

But behind it all, even the laughs, is the fear that this stuff is going to clog our cultures with bullshit.  Let me note that bullshit has haunted human society for ages.

Disinformation has been with us since the first criminal lied about their whereabouts.  It has existed in propaganda and prose, skeevy gurus and political theater.  Humans have been generating falsehoods for thousands of years without computer help – we can just do it faster.

Hell, the reason “AI” is such a threat is that humans have a long history of deception and the skills to use it.  We got really good at doing this, and now we’ve got a new tool.

So why is it so hard for people to admit that the threat of “AI” exists because of, well, history?

Perhaps some people are idealists.  To admit “AI” is a threat is to admit that there are cracks and flaws in society where propaganda and lies can slither in and split us apart.  Once you admit that, you have to acknowledge this has always been happening, and many institutions and individuals today have been happily propagandizing for decades.

Or perhaps people really wanted to believe that the internet was the Great Solution to ignorance, as opposed to a giant collection of stuff that got half-bought out by corporations.  The internet was never going to “save” us, whatever that means.  It was just a tool, and we could have used it better.  “AI” isn’t going to ruin it – it’ll just be another profit-generating tool for our money-obsessed megacorporate system, and that will ruin things.

Maybe a lot of media figures and pundits don’t want to admit how much of their jobs are propaganda-like, which is why they’re easily replaced with “AI.”  It’s a little hard to admit how much of what you do is just lying and dissembling, period.  It’s worse when a bunch of code may take away your job of spreading advertising and propaganda.

Until we admit that the vulnerabilities society has to “AI” are there because of issues that have been with us for a while, we’re not going to deal with them.  Sure, we’ll see some sensationalistic articles and overblown ranting, but we won’t deal with the real issues.

Come to think of it, someone could probably program “AI” to critique “AI” and clean up as a sensationalist pundit.  Now that’s a doomsday scenario.

Steven Savage

AI Zombies Hide Your Faces


If I were to sum up the tech news of 2022, it would be “Musk” and “AI Generation.”  Enough has been written about Musk, but the use of AI to generate art and text is still fresh and needs to be discussed.

AI Generation is soulless, and I think that has not been adequately explored.  In fact, its very soullessness explains the revulsion some people rightfully feel.  There’s hatred for the appropriation of artists’ work, for the lack of compensation, for the chance of lost jobs – but we’re also disgusted to see these works called creative when there’s “no one home.”

I’m reminded of the Doctor Who episode “Robots of Death” with the amazing Tom Baker.  Beyond being a murder mystery, it explored “robophobia,” rooted in the idea that being surrounded by human-like but not human-emoting mechanical creatures is like facing the living dead.  The Doctor was talking about what we call “the uncanny valley” these days – human-yet-not.

That’s what AI is.  Shambling would-be-people, zombies, robots, no one home.  That’s part of why we’re disgusted – but it’s worse.

Consider work that we feel connected to – some of that intimacy is shared with the creator as well.  We know someone is on the other end, with goals, a style, a way of doing things.  In turn, we have a sense of the person who did the work, or screwed up, or tried.  We need that sense of connection to understand someone, feel safe around them, or at least yell at them.

Creative work – from music to a news article – works when there’s a person there.  We humans need to know we can trust (or at least find and criticize) the creator.

Now let’s consider works that are derivative or calculated.  That knock-off work, that engineered political screed, they’re irritating to us because we can feel the manipulation.  Someone is being false with us, there’s an estimation on what will trigger us or appeal to us.  They might not even be who they say they are.

The person creating such works is less reliable to us – unless we want to believe them.  That’s our problem for wanting to believe them, of course.

Then there’s AI work, which is all calculation and manipulation.  A bunch of programs running math churns out a response that has “all the right parts,” and we perceive it as having meaning.  There’s no bright idea or inspiration at the center, no human ideas, not even the assurance that someone wants to con us.  There’s just a pile of words or pixels creating the illusion of value.

AI gives us a shambling zombie writing dead prose, or a robot pushing buttons it was told to, without the honor of having someone to hate directly for it.  It cannibalizes other, meaningful work without caring and gives nothing in return.  It’s a simulation of a person bearing a bright idea or an understandable nightmare.

AI brings no human connection to the experience.  It’s an attempt to create empty content, an illusion of humanity with no one to know or trust or criticize.  It’s void of meaning except that which we accidentally give it because it didn’t mean anything to the creator.  It’s a trick made by an undead set of equations.

These zombies are being used to manipulate us, to drive advertising and sales.  That horror you feel in your gut is warranted, because people want to flood the internet with soulless crap, and it’s inhuman.

Your disgust is quite human – and warranted.

Steven Savage