Long-term Language Misery

(This column is posted at www.StevenSavage.com, Steve’s Tumblr, and Pillowfort.  Find out more at my newsletter, and all my social media at my linktr.ee)

AI is irritatingly everywhere in news and discussions as I write this, like sand after a beach trip. Working in IT, I could hold forth on such issues as reliability, power consumption, or how they’re really “Large Language Models” (Clippy on steroids). But I’d like to explore something that does not involve complaining about AI – hold your surprise.

Instead, I’d like to complain about people. What can I say, sometimes you stick with traditions.

As is often noted in critiques of AI, they really are a sort of advanced autocomplete, which is why I prefer the term Large Language Model (LLM). They don’t think or feel or have morals, or anything else we attribute to humans and intelligence. They just ape the behavior, delivering information and misinformation in a way that sounds human.

(Yeah, yeah, it’s a talk about AI, but I’m going to call them LLMs. Live with it.)

However, when I look at LLM bullshit, misinformation, and mistakes, something seems familiar. The pretend understanding, the blatant falsehoods, the confident-sounding statements of utter bullshit. LLMs remind me of every conspiracy theorist, conspiritualist, political grifter, and buy-my-supplement extremist. You could replace Alex Jones, TikTok PastelAnon scammers, and so on with LLMs – hell, we should probably worry about how many people have already done this.

LLMs are a reminder that so many of our fellow human beings spew lies no differently than a bunch of code churning out words assembled into what we interpret as real. People falling for conspiracy craziness and health scams are falling for strings of words that happen to be put in the right order. Hell, some people fall for their own lies, convinced by the “LLMs” they created in their own heads.

LLMs require us to confront many depressing things, but the fact that we’ve been listening to the biological equivalent of them for so long has got to be up there.

I suppose I can hope that critique of LLMs will help us see how some people manipulate us. Certainly some critiques already call out conspiracy theories, political machinations, and the like. These critiques usually show how vulnerable we can be – indeed, all of us can be – to such things.

I mean, we have plenty of other concerns about LLMs and their proper and improper place. But cleaning up our own act certainly can’t hurt.

Steven Savage

AI and Chatbots: Better Someone To Hate Than A Machine


AI and Chatbots are in the news as people want to use them for everything – well, at least until reality sets in.  Now, I don’t oppose Chatbots/AI or automated help with a humanized interface.  I think there’s potential for it to make our lives better.  They really are spicy autocomplete, and there’s a role for that, even if we all remember how we hated Clippy.

The problem is that there are too many cases where people want to use so-called AI to just replace humans.  I think it will go wrong in many ways, because we want people to connect to, even if only to hate them.

If you’ve ever screamed “operator” into a phone after navigating some impossible number-punch menu you have a good idea of how Chatbots could be received.

When we need help or assistance, we want to talk to a person.  Maybe it’s for empathy.  Maybe it’s to have someone to scream at.  Either way, we want a moral agent to talk to – someone we know has an inner life and principles, even if we disagree with them.

There’s something antisocial about chatbots just replacing humans.  It breaks society and it breaks our need for contact (or blame).

Have you ever observed some horrible computer or mechanical failure?  Have you imagined or participated in the lawsuits?  Imagine how that will go with Chatbots.

Technology gives us the ability to do things on a huge level – but also to create horrible disasters.  Imagine what Chatbots can automate – financial aid, scientific research, emergency advice.  Now imagine that going wrong on a massive, tech-enabled scale.  Technology lets us turn simple mistakes into horrible crises.

If you have people along the way in the process?  They can provide checks.  They can make the ethical or practical call.  But when it’s all bots doing bot things with bots and talking to a person?  There’s that chance of ending up in the news for weeks, in government hearings for months, and lawsuits for years. 

(Hell, replacing people with Chatbots removes the poor schmuck who’d take the blame, and a few people with more money and sense might find they really want that schmuck around.)

Have you ever read a book or commissioned art and enjoyed working with the artist?  Chatbots and AI can make art without that connection.  Big deal.

Recently I read a person grousing about the cost of hiring an artist to do something – when they could just go to a program.  The thing is, for many of us, an artistic connection over literature or art or whatever is also about connecting with a person.

When we know a person is behind something we know there’s something there.  We enjoy finding the meaning in the book, the little references, the empathic bond we form with them.  An artist listens to us, understands us, brings humanity to the work we request.  It makes things real.

I read a Terry Pratchett book because it’s Terry Pratchett.  I watch the Drawfee crew because it’s Jacob, Nathan, Julia, and Karina, who I like.

Chatbot-generated content may be interesting or inspiring, but it’s just math that we drape our feelings around.  AI generated content is just a very effective Rorschach blot.  There’s no one to admire, learn from, or connect with behind it.

Humanity brings understanding, security, checks, and meaning.

So however the Chatbot/AI non-Revolution goes?  I think it will be both overdone and underwhelming.  It will include big lawsuits and sad headshakes.  But ultimately if there’s an attempt to Chatbot/AI everything, it’ll be boring and inhuman.

Well, boring and inhuman if we know there’s chatbots there.  It’s the hidden ones that worry me, but that’s for another post . . .

Steven Savage

AI Zombies Hide Your Faces


If I were to sum up tech news of 2022 it would be “Musk” and “AI Generation.”  Enough has been written about Musk, but the use of AI to generate art and text is still fresh and needs to be discussed.

AI Generation is soulless, and I think that has not been adequately explored.  In fact, its very soullessness explains the revulsion some people rightfully feel.  There’s anger over the appropriation of work, the non-compensation of artists, and the chance of lost jobs – but we’re also disgusted to see works called creative when there’s “no one home.”

I’m reminded of the Doctor Who episode “Robots of Death” with the amazing Tom Baker.  Beyond being a murder mystery, it explored “robophobia,” rooted in the idea that being surrounded by human-like but not human-emoting mechanical creatures is like facing the living dead.  The Doctor was talking about what we call “the uncanny valley” these days – human-yet-not.

That’s what AI is.  Shambling would-be-people, zombies, robots, no one home.  That’s part of why we’re disgusted – but it’s worse.

Consider work that we feel connected to – some of that intimacy is shared with the creator as well. We know someone is on the other end, with goals, a style, a way of doing things.  In turn, we have a sense of the person on the other end who did their work, or screwed up, or tried.  We need that sense of connection to understand, feel safe around someone, or at least yell at them.

Creative work – from music to a news article – works when there’s a person there.  We humans need to know we can trust (or at least find and criticize) the creator.

Now let’s consider works that are derivative or calculated.  That knock-off work, that engineered political screed – they’re irritating to us because we can feel the manipulation.  Someone is being false with us; there’s an estimation of what will trigger us or appeal to us.  They might not even be who they say they are.

The person creating such works is less reliable to us – unless we want to believe them.  That’s our problem for wanting to believe them, of course.

Then there’s AI work, which is all calculation and manipulation.  A bunch of programs running math churns out a response that has “all the right parts,” and we perceive it as having meaning.  There’s no bright idea or inspiration at the center, no human ideas, not even the assurance that someone wants to con us.  There’s just a pile of words or pixels creating the illusion of value.

AI gives us a shambling zombie writing dead prose, or a robot pushing buttons it was told to, without the honor of having someone to hate directly for it.  It cannibalizes other, meaningful work without caring and gives nothing in return.  It’s a simulation of a person bearing a bright idea or an understandable nightmare.

AI brings no human connection to the experience.  It’s an attempt to create empty content, an illusion of humanity with no one to know or trust or criticize.  It’s void of meaning except that which we accidentally give it because it didn’t mean anything to the creator.  It’s a trick made by an undead set of equations.

These zombies are being used to manipulate us to drive advertising and sales.  That horror you feel in your gut is warranted because people want to flood the internet with soulless crap, and it’s inhuman.

Your disgust is quite human – and warranted.

Steven Savage