Long-term Language Misery

(This column is posted at www.StevenSavage.com, Steve’s Tumblr, and Pillowfort.  Find out more at my newsletter, and all my social media at my linktr.ee)

AI is irritatingly everywhere in news and discussions as I write this, like sand after a beach trip. Working in IT, I could hold forth on such issues as reliability, power consumption or how they’re really “Large Language Models” (Clippy on steroids). But I’d like to explore something that does not involve complaining about AI – hold your surprise.

Instead, I’d like to complain about people. What can I say, sometimes you stick with traditions.

As is often noted in critiques of AI, LLMs really are a sort of advanced autocomplete, which is why I prefer the term Large Language Model (LLM). They don’t think or feel or have morals, or anything else we attribute to humans and intelligence. They just ape the behavior, delivering information and misinformation in a way that sounds human.

(Yeah, yeah it’s a talk about AI but I’m going to call them LLM. Live with it.)

However, when I look at LLM bullshit, misinformation, and mistakes, something seems familiar. The pretend understanding, the blatant falsehoods, the confident-sounding statements of utter bullshit. LLMs remind me of every conspiracy theorist, conspiritualist, political grifter, and buy-my-supplement extremist. You could replace Alex Jones, TikTok PastelAnon scammers, and so on with LLMs – hell, we should probably worry how many people have already done this.

LLMs are a reminder that so many of our fellow human beings spew lies no differently than a bunch of code churning out words assembled into what we interpret as real. People falling for conspiracy craziness and health scams are falling for strings of words that happen to be put in the right order. Hell, some people fall for their own lies, convinced by the “LLMs” they created in their own heads.

LLMs require us to confront many depressing things, but the fact that we’ve been listening to their biological equivalent for so long has got to be up there.

I suppose I can hope that critique of LLMs will help us see how some people manipulate us. Certainly some critiques already call out conspiracy theories, political machinations, and the like. These critiques usually show how vulnerable we can be – indeed, all of us can be – to such things.

I mean, we have plenty of other concerns about LLMs and their proper and improper place. But cleaning up our own act certainly can’t hurt.

Steven Savage

Evil Agile

We wonder how people can get away with so much horrible stuff.  I’d like to talk Evil and Agile productivity, and yes, I am completely sober as far as you know.

For those of you who are in no way familiar with me, I’m a Project Manager, a professional help-stuff-get-done guy.  While I’m being paid to be the most anal-retentive person in the room, I prefer to use Agile Methodologies, which are all about rapid, adaptable approaches to getting things done.  It doesn’t sound Evil, but stick with whatever journey I’m soberly on, because I think Evil people are actually pretty good at a kind of Agile.

Many Evil people have A Goal.  It may be (more) money and power, it may be dealing with their childhood traumas, and usually, it’s a dangerously pathetic combination of things like that.  Agile is all about Goals because when you set them, they direct your actions more than any single plan.  You gotta know where you want to go to get there.

Then, simply, Evil people set out to achieve their Goal by whatever means they can.  They don’t care if they lie, cheat, steal, burn books, burn people, and so on – the Goal is what matters.  Agile is also about making sure that your actions direct you toward your Goal so you’re focused and efficient – it just doesn’t involve Evil.

But what if Evil people hurt others, get caught, etc.?  Simple, they lie or do something else because they don’t care – they adapt.  Agile emphasizes constant adaptability and analysis as well, just with an emphasis on truth and honesty.  Evil people are pretty adaptable, even if that adaptability is staying the course and lying about it until others give up.

Agile emphasizes goals, directing yourself towards them, and adaptability.  Evil people do the exact same thing.  The only difference is that Agile emphasizes helping people and being honest, and Evil people are just Evil.

And this is why we’re so often confused by Evil people.

We expect elaborate plans from Evil people – and there may be some – but they’re focused on their Goals and how to get there.  We expect Evil people to be derailed by getting caught in lies or hurting people, but as we’ve seen, they don’t care.  They want something, and they’ll adapt no matter the price paid by other people.

It’s the banality of Evil all over again.  Evil isn’t even interesting in how it gets things done.

Steven Savage

Wondering How Long We’ll Care

We’ve got the SAG-AFTRA strike.  Big Studios and groups like Netflix seem to be very interested in replacing real people with AI – and we know they won’t stop no matter the deals made.  Ron Perlman and Fran Drescher are apparently leading the Butlerian Jihad early.

As studios, writers, and actors battle, I find myself caring about the people – but caring far less about the media produced.  There are so many reasons not to care about Big Media.

You’d think I’d be thrilled to see Star Wars, Marvel Comics, and Star Trek everywhere!  But so many things are omnipresent that it sucks the oxygen out of the room.  Even when something is new, it can be overhyped.  If it’s not everywhere, it’s marketed everywhere, and I get tired of it all.  Also damn, how much anime is there now?

The threat of AI replacing actors and writers removes that personal connection to actors, writers, and creators.  There was already a gap anyway, as groups of writers created shows and episodes, abstracting the connections with the creators.  The headlong rush into AI only threatens to make me care less – I can’t go to a convention and shake hands with a computer program, or be inspired to write just as well as a program.

We have plenty of content made already anyway.  I could do with a good review of Fellini, maybe rewatch Gravity Falls, and I recently threatened to watch all of One Piece for inexplicable reasons.  Plus, of course, I have tons of books.

Finally, there are all sorts of small creators, new and old, I should take a look at.  Maybe I don’t need the big names anymore.  Hell, the small creators are easier to connect with.

Meanwhile, all of the above complaints are pretty damned petty considering the planet is in a climate crisis and several countries are falling apart politically and economically.  I’m not going to care about your perfect AI show when the sky turns orange because of a forest fire.

I have a gut feeling I’m not alone in just kind of losing interest in the big mediascape.  We may have different triggers for giving up, but there are a lot of possible triggers.  Plus, again, potential world crises create all sorts of possibilities.

Maybe that’s why the “Barbenheimer” meme was so joyful, with people discussing these two very different films as a kind of single phenomenon.  It was spontaneous, it was silly, it was self-mocking.  Something just arose out of the big mediascape (and two apparently good films), a very human moment of a kind it seems we’re all too often lacking.

Maybe it’s a reminder that we can care about our media.  But in the chaotic times we face, in a strange era of media, I wonder if we’ll remember it as a fond exception.

Steven Savage