It’s The Ones We Noticed

(This column is posted at www.StevenSavage.com, Steve’s Tumblr, and Pillowfort.  Find out more at my newsletter, and all my social media at my linktr.ee)

People developing psychosis while using ChatGPT has been in the news a lot. Well, the latest story is about an OpenAI investor who seemed to lose it in real time, leading to, shall we say, concerns. The gentleman in question seemed to spiral into thinking the world was like the famous SCP Foundation collective work.

Of course people were a little concerned. A big AI investor losing his mind isn’t exactly building confidence in the product or the company. Or, for that matter, in investing.

But let me gently suggest that the real concern is that this is the one we noticed.

This is not to say all sorts of AI bigwigs and investors are losing their minds – I think some of them have other problems or lost their minds for different reasons. This isn’t to say the majority of people using AI are going to go off into some extreme mental tangent. The problem is that AI, having been introduced recently, is going to have impacts on mental health that will be hard to recognize because this is all happening so fast.

Look, AI came on quick. In some ways I consider that quite insidious, as it’s clear everyone jumped on board looking for the next big thing. In some ways it’s understandable because, all critiques aside (including my own), some of it is cool and interesting. But like a lot of things, we didn’t ask what the repercussions might be, which has been a bit of a problem since around about the internal combustion engine.

So now that we have examples of people losing their minds – and developing delusions of grandeur – due to AI, what are we missing?

It might not be as bad as the cases that make the news – no founding a religion or creating some metafiction roleplay that’s too real to you. But a bit of an extra weird belief, that strange thing you’re convinced of, something that’s not as noticeable but still too far. Remember all the people who got into weird conspiracies online? Yeah, well, we’ve automated that.

We’re also not looking for it and maybe it’s time we do – what kind of mental challenges are people developing due to AI that we’re not looking for?

There might not even be anything – these cases may just be unfortunate ones that stand out. But I’d really kind of like to know, especially as the technology spreads, and as you know I think it’s spreading unwisely.

Steven Savage

It’s Bad It’s So Bad It’s Good


All right, it’s time to talk AI again. This also means I have to use my usual disclaimer of “what we call AI has been around for a while, it’s been very useful and is useful, but we’re currently in an age of hype that’s creating a lot of crap.” Anyway, there, packed that disclaimer into one sentence, go me.

I’ve seen “AI-ish stuff” for 30 years, and the hype for it is way different this time.

Watching the latest hype for “AI” (that pile of math and language that people are cramming into everything needed or not) I started listening to the hype that also seemed to be a threat. We have to build this. We have to build this before bad guys build it. We have to build a good AI before a bad AI. This may all be dangerous anyway!

Part of current AI marketing seems to be deliberately threatening. In a lot of cases it’s the threat of AI itself, which, you know, may not be a selling point. I mean, I don’t want a tool that might blow up in my face. Also, Colossus: The Forbin Project freaked me out as a kid, and that was about competing AIs teaming up, so you’re not selling me with the threat that we have to make AI to stop AI.

But this marketing-as-threat gnawed at me. It sounded familiar, in that “man, that awful smell is familiar” type way. It also wasn’t the same as what I was used to in tech hype, and again, I’ve worked in tech for most of my life. Something was different.

Then it struck me. A lot of the “hype of the dangerous-yet-we-must-use-it” aspects of AI sounded like the lowest form of marketing aimed at men.

You know the stuff. THIS energy drink is SO dangerous YET you’re a wimp if you don’t try it. Take this course to make you a super-competitive business god – if you’re not chicken, oh and your competitors are taking it anyway. Plus about every Influencer on the planet with irrelevant tats promising to make you “more of a man” with their online course. The kind of stuff that I find insulting as hell.

Male or female, I’m sure you’re used to seeing these kinds of “insecure dude” marketing techniques. If you’re a guy, you’re probably as insulted as I am. You’d probably also like them to stop showing up in your ads thanks to algorithms.

(Really, look, online ads, my prostate is fine and I’m not interested in your weird job commercials.)

Seeing the worst of AI hype as no different from faux-macho advertisements meant to sell useless stuff to insecure guys really makes it sit differently. That whiff of pandering and manipulation, of playing to insecurity mixed with power fantasies, is all there. The difference between the latest AI product and untested herbal potency drugs is nil.

And that tells me our current round of AI hype is way more about hype than actual product, and is way more pandering than a lot of past hype. And after 30+ years in IT, I’ve been insulted by a lot of marketing, and this is pretty bad.

With that realization I think I can detect and diagnose hype easier. Out of that I can navigate the current waters better – because if your product marketing seems to be a mix of scaring and insulting me, no thanks.

Steven Savage

Think of the Warehouses


In one of those online discussions I wish I’d kept a link to, someone posed a comment along the lines of “Imagine how many warehouses we’d need to store the data we have if we didn’t have computers.” For a moment I thought “yes, that’d take a lot of space,” followed by me getting a lot more thoughtful.

I love a good exercise of “what if we didn’t have X/did X” even if it means contemplating the horror of a world without video games. So let’s imagine all the data we collect by computer today and if we had to store it and move it physically – with the occasional phone call to get someone to dig around in a box.

Think about all the data you have to fill out on the job and in your life, all the forms and orders and everything else. Imagine if you had to do it on paper, file it, store it, mail it. Quite a lot, isn’t it? Imagine the nonexistent warehouses your employer and government would need.

Now, ask yourself why we collect all of that data, because you know what, I bet we don’t need it.

How many fields and forms do you fill out because the software is collecting data based on some default setting? Pay a bunch of money to a SaaS vendor, flip on all the settings, and go. There has to be a reason for all those fields, right? Why assume that? We’ve made it easy to collect data for no good reason or by accident.

Now imagine if all that unneeded data needed warehouses.

In fact, on that subject, how much software and setup collects data “just in case” or “because someone asked?” Someone in a department that’s part of another department figured they might need the data. Someone else figured you add that extra field so they don’t get in trouble. Software gives us an amazing ability to create more work for ourselves fast.

More data. The imaginary warehouses get larger.

Then, with all of this data we’re collecting that we don’t need and don’t want (and probably get wrong), there are going to be horrible errors. We’re going to have to hunt for information we forgot we didn’t need anyway. We’re going to lose data because we filled out that other form we didn’t need. That just generates more data to track down the errors in our data.

We’d need warehouses to store data about errors in our warehouses.

All of those above complaints/rants/notes also make it much harder to collect and store the actual data we need. We can’t even use the warehouses we have and they’re imaginary.

The purpose of this extended, self-indulgent metaphorical walk is to illustrate painfully a truth we’re all low-key aware of. We collect too much damn data we don’t need and it makes things worse. It’s so easy to get information, put in a web field, or scan a document that we rarely stop to ask if we need any of it or if it does any good.

Thinking about computing systems and asking “what if we had to store this physically” is a great way to find out how much we care.

I honestly wish such a metaphorical exercise wasn’t so useful – this is me, I like technology. We should be asking if we need data, if it’s hard to collect, and how much risk we’re creating by collecting all of this.

But if a physical example is needed, as I think it is these days, so be it.

Steven Savage