But What If It’s Not Worth Doing?

(This column is posted at www.StevenSavage.com, Steve’s Tumblr, and Pillowfort.  Find out more at my newsletter, and all my social media at my linktr.ee)

OK, this isn’t another post on AI exactly. I get it, there’s a lot of talk of AI – hell, I talk about it a lot, usually whenever Ed Zitron goes on a tear or my friends in tech (i.e., all my friends) discuss it. If I were friends with Ed Zitron, who knows what I’d write.

The funny thing about AI is that it’s about automation. Yes it’s complex. Yes it’s controversial. Yes, it lets you generate pictures of Jesus as a Canadian Mountie (Dudley Do-Unto-Others?). But it’s automation at the end of the day. It’s no different than a clock or a pneumatic delivery system.

And, referencing a conversation I had with friends, when you automate something on the job or at home, let’s ask a question – should you have been doing it anyway?

First, if you get something you have to automate, ask: should it be assigned to you at all? If something really isn’t part of your portfolio of work, maybe someone else should do it. Yes, this includes home tasks – like the shelves you have not put up and almost certainly never will.

A painful reality I’ve come to realize is that many people take on tasks someone else could do, and often do better. For whatever reason the task drifts to them, and of course they stick with it. Worse, the people who would actually be good at it often have more time for it and would strain themselves less doing it.

A need to automate something often says “I don’t need to do it and I may be bad at it,” and the task should move up or down or somewhere else. I’m not saying automate it – I’m saying reassign it, perhaps to someone who will automate it anyway, but still.

Secondly, and more importantly, if you have a task that can be automated it’s time to ask if anyone should be doing it period.

Anything really important needs a person, a moral authority, to make a decision. You have both the decision-making skills and the ethical ability to make the right call. Automation certainly doesn’t have the ethical element, and if the task doesn’t need your decision-making skills . . . why are you, or anyone, doing it?

The task might be unnecessary. It could – and trust me I see this a lot – be the result of other automatic generation or other bad choices. It may be a signoff no one needs to sign off on, an automatic update you don’t need to be updated on, or who knows what else. I honestly think a lot of work is generated by other automatic processes and choices that could just bypass people anyway.

But there’s also the chance the task is unneeded, shouldn’t exist, or is really a bad idea. Look, if the task is assigned to you, a competent individual with good morals, and you want to automate it, maybe it just should never have existed. Just as good Agile methods are about making sure you don’t do unneeded work, good process is the same.

Whenever something has to be automated, it’s a good time to ask “why did it come to me anyway?” Because the answer may save you time automating, instead letting you hand it off, change how things work, or just ignore it.

And that’s not just AI. That’s anything.

Steven Savage

It’s The Ones We Noticed


People developing psychosis while using ChatGPT has been in the news a lot. Well, the latest story is about an OpenAI investor who seemed to lose it in real time, leading to, shall we say, concerns. The gentleman in question seemed to spiral into thinking the world was like the famous SCP Foundation collective work.

Of course people were a little concerned. A big AI investor losing his mind isn’t exactly building confidence in the product or the company. Or, for that matter, in investing.

But let me gently suggest that the real concern is that this is the one we noticed.

This is not to say all sorts of AI bigwigs and investors are losing their minds – I think some of them have other problems or lost their minds for different reasons. This isn’t to say the majority of people using AI are going to go off into some extreme mental tangent. The problem is that AI, having been introduced recently, is going to have impacts on mental health that will be hard to recognize because this is all happening so fast.

Look, AI came on quick. In some ways I consider that quite insidious, as it’s clear everyone jumped on board looking for the next big thing. In some ways it’s understandable because, all critiques aside (including my own), some of it is cool and interesting. But like a lot of things, we didn’t ask what the repercussions might be, which has been a bit of a problem since around about the internal combustion engine.

So now that we have examples of people losing their minds – and developing delusions of grandeur – due to AI, what are we missing?

It might not be as bad as the cases that make the news – no founding a religion or creating some metafiction roleplay that’s too real to you. But a bit of an extra weird belief, that strange thing you’re convinced of, something that’s not as noticeable but goes too far. Remember all the people who got into weird conspiracies online? Yeah, well, we’ve automated that.

We’re also not looking for it, and maybe it’s time we did – what kind of mental challenges are people developing due to AI that go unnoticed?

There might not even be anything – these cases may just be unfortunate ones that stand out. But I’d really kind of like to know, especially as the technology spreads, and as you know I think it’s spreading unwisely.

Steven Savage

They Can’t Stand Humanity


As is my usual, I’ve got an obsession, and if you follow me you know my latest is Ed Zitron’s Business Idiots – his explanation for why things are messed up: “leaders” living in abstract bubbles away from reality. Zitron hit on something that summed up things I’d seen elsewhere – that some so-called business leaders end up isolated from reality, and some people find that to be a goal. Since then, I’ve been chewing this over – like Dan Davies or Ted Gioia, Zitron got me thinking.

As I’ve been analyzing the Business Idiot phenomenon, it struck me that some Business Idiots actually don’t seem to like people. I won’t be naming names, but you can guess.

I first began thinking about this when I noticed some Business Idiots having a rising anti-diversity mindset. As if acknowledging people’s differences is some kind of assault on their senses and so on. Of course really it’s a mix of political opportunism and a belief in their own superiority (which is easy when you hit the jackpot and spend ten years with yes-men). The thing is humanity is diverse, and the idea that you don’t have to deal with that tells me you just don’t want to deal with people – unless they’re little clones of you.

And clones of you aren’t really people, but the Business Idiot can’t bear to have their world intruded on by anything but the same thing.

This of course also goes into the weird natalism of some Business Idiots. The people who suddenly want a harem and a ton of kids. The people who get real worried about birthrates (at least among some colors of people) yet don’t acknowledge how hard it is to raise a kid in many countries. The people who talk about having more kids while forgetting our world is really becoming inhospitable.

Again, wanting a world of people like them (as well as being such Business Idiots that they don’t want to face Climate Change). And they don’t want people, they want copies – something my friend Serdar even speculated on in his book Flight of the Vajra.

But really, if you want to get the Business Idiots not liking people, just look at the endless emphasis on AI replacing people. They’re giddy over the idea of getting rid of so many people to replace them with slop, half-baked ideas, and things that so-called AI can’t do. And yes, insert my usual disclaimer on AI here, but still.

They’re selling us a world with fewer people – and fewer people different from them. The Business Idiots don’t like people.

Yet, there’s more. Some Business Idiots get obsessed with life extension and self-perfection, going to ridiculous lengths. Biomonitoring, slamming supplements, dropping ritual hallucinogens with no instructions, etc. There’s a point where this isn’t so much refining the self (a term I like, as it implies a calm approach) but an outright attempt to beat the self into a new form.

They don’t even like themselves, these Business Idiots.

Of course it’s no surprise: the Business Idiots, from nepo babies to people who won the VC lottery at the right time and then lost their minds, live in a world insulated from humanity. They live in a world of yes-men and confirmation bias, grifters and hangers-on. Past a certain point you have to lose your mind a little bit because you’re outside of reality.

People remind you of reality. Even your aging face reminds you of reality.

So we may laugh at the Business Idiots. But I’m really coming to the conclusion that some of them don’t like us that much and we need to deal with that.

Steven Savage