Long-term Language Misery

(This column is posted at www.StevenSavage.com, Steve’s Tumblr, and Pillowfort.  Find out more at my newsletter, and all my social media at my linktr.ee)

AI is irritatingly everywhere in news and discussions as I write this, like sand after a beach trip. Working in IT, I could hold forth on issues like reliability, power consumption, or how they’re really “Large Language Models” (Clippy on steroids). But I’d like to explore something that does not involve complaining about AI – hold your surprise.

Instead, I’d like to complain about people. What can I say, sometimes you stick with traditions.

As is often noted in critiques of AI, they really are a sort of advanced autocomplete, which is why I prefer the term Large Language Model (LLM). They don’t think or feel or have morals, or anything else we attribute to humans and intelligence. They just ape the behavior, delivering information and misinformation in a way that sounds human.

(Yeah, yeah, it’s a talk about AI but I’m going to call them LLMs. Live with it.)

However, when I look at LLM bullshit, misinformation, and mistakes, something seems familiar. The pretend understanding, the blatant falsehood, the confident-sounding statements of utter bullshit. LLMs remind me of every conspiracy theorist, conspiritualist, political grifter, and buy-my-supplement extremist. You could replace Alex Jones, TikTok PastelAnon scammers, and so on with LLMs – hell, we should probably worry how many people have already done this.

LLMs are a reminder that so many of our fellow human beings spew lies no differently than a bunch of code churning out words assembled into what we interpret as real. People falling for conspiracy craziness and health scams are falling for strings of words that happen to be put in the right order. Hell, some people fall for their own lies, convinced by “LLMs” they created in their own heads.

LLMs require us to confront many depressing things, but how we’ve been listening to the biological equivalent of them for so long has got to be up there.

I suppose I can hope that critique of LLMs will help us see how some people manipulate us. Certainly some critiques already call out conspiracy theories, political machinations, and the like. These critiques usually show how vulnerable we can be – indeed, all of us can be – to such things.

I mean, we have plenty of other concerns about LLMs and their proper and improper place. But cleaning up our own act certainly can’t hurt.

Steven Savage

When Good Things Are Bad Ideas


In Project Management there’s something called the Iron Triangle or the Project Management Triangle.  A project has to balance Time, Scope, and Cost to keep up quality.  At best you can have two of them the way you want; the third will become unpredictable or unlimitable, or you’ll have to accept some serious changes.

If you want things done your way on time, get ready for it to cost more.  If you want something at a set cost and scope, get ready for time to get a mite out of control.  If you want things on time and for a set cost, get ready to reduce your scope.  Play too fast and loose and things will fall apart.
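The pick-two rule above can be sketched in a few lines of code.  This is my own toy model, not anything from project management literature – it just names which corner of the triangle gives way once you decide which corners you insist on fixing:

```python
# Toy model of the Iron Triangle: fix at most two of time, scope, and
# cost, and whatever you didn't fix absorbs the slack.  Fix all three
# and quality is what collapses.

def iron_triangle(fixed: set) -> str:
    """Given the constraints you insist on fixing, name what gives way."""
    corners = {"time", "scope", "cost"}
    unknown = fixed - corners
    if unknown:
        raise ValueError(f"unknown constraint(s): {unknown}")
    if fixed == corners:
        return "quality"  # demanding all three breaks the one thing left
    free = corners - fixed
    # The unfixed corner(s) become unpredictable.
    return " and ".join(sorted(free))

print(iron_triangle({"time", "cost"}))            # scope
print(iron_triangle({"scope", "cost"}))           # time
print(iron_triangle({"time", "scope", "cost"}))   # quality
```

Demanding a feature set on a fixed budget (`{"scope", "cost"}`) leaves `time` as the free variable – exactly the schedule slip the paragraph above warns about.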

We’re taught to do things Fast (time), Accurately (scope), and Cheap (cost).  But those things aren’t always good and can’t always be done together.  We Project Managers remind people of this again and again, often with an “I told you so.”

Which leads me to our current crisis in social media where everything is, well, rather dumb.  I have no idea where the hell Twitter is actually going.  Facebook keeps trying new things, but the core experience is kinda ad-filled and unpleasant.  There’s not a lot of innovation out there, and it’s becoming more and more clear we’re the product.

But when you think of the Iron Triangle it all makes sense.  Social Media companies want to have it all ways – make money (cost), do everything to keep people and advertisers (scope), and do it all fast (time).  As people like me constantly remind folks, you cannot do this.

Sometimes cheap, effective, and fast are bad ideas.  My job – my own habits – leads me to want to be cheap, effective, and fast, and I know those aren’t always good.

Social media is “free,” but the money has to come from somewhere, and the people invested in it want to make money.  This means the enshittification we’ve seen is near inevitable.  People don’t want to pay, advertisers aren’t always happy, and executives want to make the big bucks.  That may not be sustainable.

Cost is a problem in social media (and that cost isn’t always money).

Social media has to provide some service, but there aren’t a lot of new ideas (look at all the Twitter clones), and way too much seems to be, well, what we got used to.  I’m suspicious that a lot of the social media we love now is habit, not stuff we actually need.  Throw in companies trying to do everything or anything, regardless of whether it can work or people want it?

What’s the scope for social media?  Hell, who’s the real customer?  The users aren’t, exactly – not unless you charge them appropriately, and that brings in the cost problem.

Finally, sure, social media is efficient in some ways – you do a lot, fast, in a unified interface.  Sure, technology lets us deliver features fast.  But is fast good?  Who needs new features we don’t care about?  Is it really vital that we be able to reply immediately to someone’s movie opinions?  Do we need to do everything from one app that’s also potentially vulnerable?

What’s the real timeframe we need with our social media – if we need social media as we know it now?

Social Media has walked face-first into the Iron Triangle, which would normally collapse projects and businesses.  But they built enough of a footprint, and did enough right at first, that they can keep going, maybe forever.  At best, though, a lot of them right now are a mix of pet projects, money extraction machines, and maybe lawsuit fodder.

Some of us might even get to say “I told you so.”  Well, more than we have.

Steven Savage

AI: Same As We Never Admitted It Was


(I’d like to discuss Large Language Models and their relatives – the content generation systems often called AI.  I will refer to them as “AI” in quotes because they may be artificial, but they aren’t intelligent.)

Fears of “AI” damaging human society are rampant as of this writing in May of 2023.  Sure, AI-generated pizza commercials seem creepily humorous, but code-generated news sites are raking in ad sales and there are semi-laughable but disturbing political ads.  “AI” seems to be a fad, a threat, and a joke at the same time.

But behind it all, even the laughs, is the fear that this stuff is going to clog our cultures with bullshit.  Let me note that bullshit has haunted human society for ages.

Disinformation has been with us since the first criminal lied about their whereabouts.  It has existed in propaganda and prose, skeevy gurus and political theater.  Humans have been generating falsehoods for thousands of years without computer help – we can just do it faster.

Hell, the reason “AI” is such a threat is that humans have a long history of deception and the skills to use it.  We got really good at doing this, and now we’ve got a new tool.

So why is it so hard for people to admit that the threat of “AI” exists because of, well, history?

Perhaps some people are idealists.  To admit “AI” is a threat is to admit that there are cracks and flaws in society where propaganda and lies can slither in and split us apart.  Once you admit that, you have to acknowledge this has always been happening, and that many institutions and individuals today have been happily propagandizing for decades.

Or perhaps people really wanted to believe that the internet was the Great Solution to ignorance, as opposed to a giant collection of stuff that got half-bought out by corporations.  The internet was never going to “save” us, whatever that means.  It was just a tool, and we could have used it better.  “AI” isn’t going to ruin it – it’ll just be another profit-generating tool for our money-obsessed megacorporate system, and that will ruin things.

Maybe a lot of media figures and pundits don’t want to admit how much of their jobs are propaganda-like, which is why they’re easily replaced with “AI.”  It’s a little hard to admit how much of what you do is just lying and dissembling, period.  It’s worse when a bunch of code may take away your job of spreading advertising and propaganda.

Until we admit that the vulnerabilities society has to “AI” are there because of issues that have been with us for a while, we’re not going to deal with them.  Sure we’ll see some sensationalistic articles and overblown ranting, but we won’t deal with the real issues.

Come to think of it, someone could probably program “AI” to critique “AI” and clean up as a sensationalist pundit.  Now that’s a doomsday scenario.

Steven Savage