AI: Same As We Never Admitted It Was

(This column is posted at www.StevenSavage.com, Steve’s Tumblr, and Pillowfort.  Find out more at my newsletter, and all my social media at my linktr.ee)

(I’d like to discuss Large Language Models and their relatives – the content generation systems often called AI.  I will refer to them as “AI” in quotes because they may be artificial, but they aren’t intelligent.)

Fears of “AI” damaging human society are rampant as of this writing in May of 2023.  Sure, AI-generated pizza commercials seem creepily humorous, but code-generated news sites are raking in ad sales and there are semi-laughable but disturbing political ads.  “AI” seems to be a fad, a threat, and a joke at the same time.

But behind it all, even the laughs, is the fear that this stuff is going to clog our cultures with bullshit.  Let me note that bullshit has haunted human society for ages.

Disinformation has been with us since the first criminal lied about their whereabouts.  It has existed in propaganda and prose, skeevy gurus and political theater.  Humans have been generating falsehoods for thousands of years without computer help – we can just do it faster.

Hell, the reason “AI” is such a threat is that humans have a long history of deception and the skills to use it.  We got really good at this, and now we’ve got a new tool.

So why is it so hard for people to admit that the threat of “AI” exists because of, well, history?

Perhaps some people are idealists.  To admit AI is a threat is to admit that there are cracks and flaws in society where propaganda and lies can slither in and split us apart.  Once you admit that, you have to acknowledge this has always been happening, and that many institutions and individuals have been happily propagandizing for decades.

Or perhaps people really wanted to believe that the internet was the Great Solution to ignorance, as opposed to a giant collection of stuff that got half-bought out by corporations.  The internet was never going to “save” us, whatever that means.  It was just a tool, and we could have used it better.  “AI” isn’t going to ruin it – it’ll just be another profit-generating tool for our money-obsessed megacorporate system, and that will ruin things.

Maybe a lot of media figures and pundits don’t want to admit how much of their jobs are propaganda-like, which is why they’re so easily replaced with “AI.”  It’s a little hard to admit how much of what you do is just lying and dissembling, period.  It’s worse when a bunch of code may take away your job of spreading advertising and propaganda.

Until we admit that society’s vulnerabilities to “AI” exist because of issues that have been with us for a long time, we’re not going to deal with them.  Sure, we’ll see some sensationalistic articles and overblown ranting, but we won’t deal with the real issues.

Come to think of it, someone could probably program “AI” to critique “AI” and clean up as a sensationalist pundit.  Now that’s a doomsday scenario.

Steven Savage

Expected Enjoyment

(This column is posted at www.StevenSavage.com, Steve’s Tumblr, and Pillowfort.  Find out more at my newsletter, and all my social media at my linktr.ee)

I was discussing popular works with Serdar, and we had both experienced the pressure to enjoy something everyone else was enjoying.  I felt it had gotten worse in the last two decades and was honestly getting on my nerves.  There were more choices, but there seemed to be more pressure to like certain things, and I’ve been trying to articulate why.

I grew up with “Must See TV,” and every year there was some blockbuster in the theater, but that was different.  Dallas was big, but people seemed to accept it might not be your cup of tea – and I was ten, so I didn’t care.  I loved Star Wars, but it was a bolt-of-lightning thing, and no one expected everyone to like it.  There were Big Things, but I don’t recall the sheer pressure to like them.

The ever-expanding world of cable television, foreign films, anime, and the internet brought us even more options.  In the 1990s, the idea of something being Mandatory Fun (apologies to Weird Al) was alien to me – there was something for everyone and more of it all the time.  Why would anything feel mandatory?

Then came Harry Potter.  I am loath to discuss it due to the author’s horrid transphobia, but this is a historical rant, and thus I strive for accuracy.

Harry Potter was something everyone seemed into, and I felt pressure to read it, which irritated the hell out of me.  I think the fact that it was an internet sensation made it omnipresent; people didn’t get that you might not be into it, because all their friends were.  It was an internet-fueled blockbuster.

(I did eventually read it, by the way, after people had backed off.)

To this day, the internet and social media have a selective amplification effect.  Something can take off, amplified by social media algorithms and good marketing, and soon you’re sick of hearing about it. Chats, posts, memes, etc. all amplify certain things repeatedly – people doing marketing for free.  At some point, you’re missing having a political argument with your crazy relatives because they’re busy telling you about this new TV show you have to watch.

The wealth of movies, shows, and books we have doesn’t free us either – and I blame social media and marketing for that as well.  People can easily find fellow fans – and assume everyone else has similar interests.  Algorithm-driven ads target you relentlessly.  More choices somehow led to more pressure, and we’ve forgotten not everyone cares about the same things.  Now we just have more not to care about.

Finally, you have the synergy of media universes: Marvel, Star Trek, and Star Wars.  These giant unified properties (and marketing efforts) amplify each other.  Show A leads to movie B, which leads to webseries C, all funneling you into a giant media matrix.  Throw in social pressure and social media amplification trying to manipulate you, and you start feeling like you’re in a very poor take on They Live, only you’re not as cool as Rowdy Roddy Piper.

We’re living inside a giant marketing machine of technology and social habits.

I’m not proposing a way out; I’m here to analyze and complain.  Perhaps I’ll present some brilliant solutions in the future, but right now I’m settling for understanding it better, saying “no” more, expanding my horizons, and just doing what I like.

Maybe I’ll have more to say.  But now I’m just glad to have it out of my head – and into yours.  So I’d love your thoughts.

Steven Savage

AI and Chatbots: Better Someone To Hate Than A Machine

(This column is posted at www.StevenSavage.com, Steve’s Tumblr, and Pillowfort.  Find out more at my newsletter, and all my social media at my linktr.ee)

AI and Chatbots are in the news as people want to use them for everything – well, at least until reality sets in.  Now, I don’t oppose Chatbots/AI or automated help with a humanized interface.  I think there’s potential there to make our lives better.  They really are spicy autocomplete, and there’s a role for that, even if we all remember how much we hated Clippy.

The problem is that there are too many cases where people want to use so-called AI to simply replace humans.  I think that will go wrong in many ways, because we want people to connect with, even if only to hate them.

If you’ve ever screamed “operator” into a phone after navigating some impossible number-punch menu, you have a good idea of how Chatbots could be received.

When we need help or assistance, we want to talk to a person.  Maybe it’s for empathy.  Maybe it’s to have someone to scream at.  Either way, we want a moral agent to talk to: someone we know has an inner life and principles, even if we disagree with them.

There’s something antisocial about chatbots simply replacing humans.  It breaks society, and it denies our need for contact (or blame).

Have you ever observed some horrible computer or mechanical failure?  Have you imagined or participated in the lawsuits?  Imagine how that will go with Chatbots.

Technology gives us the ability to do things on a huge scale – but also to create horrible disasters.  Imagine what Chatbots could automate – financial aid, scientific research, emergency advice.  Now imagine that going wrong on a massive, tech-enabled scale.  Technology lets us turn simple problems into horrible crises.

If you have people along the way in the process?  They can provide checks.  They can make the ethical or practical call.  But when it’s all bots doing bot things with bots before talking to a person?  There’s a chance of ending up in the news for weeks, in government hearings for months, and in lawsuits for years.

(Hell, replacing people with Chatbots removes the poor schmuck who used to take the blame, and a few people with more money and sense might realize they really want that person around.)

Have you ever read a book or commissioned art and enjoyed working with the artist?  Chatbots and AI can make art without that connection.  Big deal.

Recently I read someone grousing about the cost of hiring an artist to do something – when they could just go to a program.  The thing is, for many of us an artistic connection over literature or art or whatever is also about connecting with a person.

When we know a person is behind something we know there’s something there.  We enjoy finding the meaning in the book, the little references, the empathic bond we form with them.  An artist listens to us, understands us, brings humanity to the work we request.  It makes things real.

I read a Terry Pratchett book because it’s Terry Pratchett.  I watch the Drawfee crew because I like Jacob, Nathan, Julia, and Karina.

Chatbot-generated content may be interesting or inspiring, but it’s just math that we drape our feelings around.  AI-generated content is just a very effective Rorschach blot.  There’s no one to admire, learn from, or connect with behind it.

Humanity brings understanding, security, checks, and meaning.

So however the Chatbot/AI non-Revolution goes?  I think it will be both overdone and underwhelming.  It will include big lawsuits and sad headshakes.  But ultimately if there’s an attempt to Chatbot/AI everything, it’ll be boring and inhuman.

Well, boring and inhuman if we know there are chatbots there.  It’s the hidden ones that worry me, but that’s for another post . . .

Steven Savage