(This column is posted at www.StevenSavage.com, Steve’s Tumblr, and Pillowfort. Find out more at my newsletter, and all my social media at my linktr.ee)
(I’d like to discuss Large Language Models and their relatives – the content generation systems often called AI. I will refer to them as “AI” in quotes because they may be artificial, but they aren’t intelligent.)
Fears of “AI” damaging human society are rampant as of this writing in May of 2023. Sure, AI-generated pizza commercials seem creepily humorous, but code-generated news sites are raking in ad sales, and there are semi-laughable yet disturbing political ads. “AI” seems to be a fad, a threat, and a joke all at the same time.
But behind it all, even the laughs, is the fear that this stuff is going to clog our cultures with bullshit. Let me note that bullshit has haunted human society for ages.
Disinformation has been with us since the first criminal lied about their whereabouts. It has existed in propaganda and prose, skeevy gurus and political theater. Humans have been generating falsehoods for thousands of years without computer help – we can just do it faster.
Hell, the reason “AI” is such a threat is that humans have a long history of deception and the skills to use it. We got really good at doing this, and now we’ve got a new tool.
So why is it so hard for people to admit that the threat of “AI” exists because of, well, history?
Perhaps some people are idealists. To admit “AI” is a threat is to admit that there are cracks and flaws in society where propaganda and lies can slither in and split us apart. Once you admit that, you have to acknowledge this has always been happening, and that many institutions and individuals today have been happily propagandizing for decades.
Or perhaps people really wanted to believe that the internet was the Great Solution to ignorance, as opposed to a giant collection of stuff that got half-bought out by corporations. The internet was never going to “save” us, whatever that means. It was just a tool, and we could have used it better. “AI” isn’t going to ruin it – it’ll just be another profit-generating tool for our money-obsessed megacorporate system, and that will ruin things.
Maybe a lot of media figures and pundits don’t want to admit how much of their jobs are propaganda-like, which is why they’re so easily replaced with “AI.” It’s a little hard to admit how much of what you do is just lying and dissembling, period. It’s worse when a bunch of code may take away your job of spreading advertising and propaganda.
Until we admit that the vulnerabilities society has to “AI” are there because of issues that have been with us for a while, we’re not going to deal with them. Sure, we’ll see some sensationalistic articles and overblown ranting, but we won’t deal with the real issues.
Come to think of it, someone could probably program “AI” to critique “AI” and clean up as a sensationalist pundit. Now that’s a doomsday scenario.