Take Some Responsibility

You’ve probably heard the news: Air Canada had to pay up for something an “AI” chatbot said. This story saddens me, as I love flying on Air Canada. Honestly, in my trips up there, the flight is often part of the fun.

Basically, a guy asked an Air Canada chatbot for advice on canceling a flight due to bereavement, and it gave him advice on refunds that was wrong. He followed the advice, and of course when he had to cancel, he didn’t get his refund, so he made a small claims complaint to the appropriate body. Air Canada argued – seriously – that the chatbot is a legally distinct entity and that the guy shouldn’t have trusted its advice, but should instead have followed a link the chatbot provided – a link from the very chatbot that had gotten things wrong.

Obviously, that didn’t fly, excuse the really stupid pun.

As an IT professional whose career is “older than One Piece,” let me weigh in.

I work in medical technology (indeed, it’s my plan to do this for the rest of my career). We vet everything we install or set up. We regularly review everything we set up. We have support systems to make sure everything is working. This is, of course, because if you screw up anything medical, bad things happen.

Also, it’s because someone who goes into medical anything is usually pretty responsible. We IT folks are in the mix every day and know the impact of our job. We also work with – and sometimes are or were – doctors and nurses and other medical professionals who get it.

I love working in this environment. If this appeals to you, I can honestly say check out working in medicine, medical research, and education. It’s awesome.

Know what? Other people using technology can and should take the same level of responsibility.

Technology is a choice. What you use, how you implement it, how you expose people to it – all of that is a choice. If you built it or paid for it or whatever, you take responsibility when it goes wrong, whether the stakes are a life or someone’s refund.

If the product isn’t what you thought? Then those who made it owe you an apology, a wad of cash, corporate dissolution, whatever. But either way, someone takes responsibility, because technology is a choice.

We’ve certainly had enough of moving fast and breaking things, which really seems to just result in enshittification and more and more ways to be irresponsible.

Besides, reputation is involved, and if nothing else, saying “we don’t care if our technology on a website goes wrong” is going to make people question everything else you do. I mean, if you were on an Air Canada plane after hearing about this “sorry, not our fault” approach, how safe are you going to feel?

Let’s try to be responsible here.

Steven Savage

The Money In Cleanup

I have an acquaintance who helps migrate businesses off of ancient and inappropriate databases onto more recent ones. If you wonder how ancient and inappropriate, let me simply state “not meant for industry” and “first created when One Piece the anime started airing,” and you can guess. Now and then he literally goes and cleans up questionable and persistent bad choices.

In the recent unending and omnipresent discussions of AI, I saw a similar proposal. A person rather cynical about AI mused that someone might make a living in the next few years backing a company’s tech and processes OUT of AI. Such things might seem ridiculous, until you consider my aforementioned acquaintance and the fact that he gets paid to help people back out of past decisions. Think of it as “migration from a place you shouldn’t have migrated to.”

It’s weird to think that in technology, which always seems (regrettably) to be about forward motion, there’s money in reversing decisions. Maybe it was the latest thing and now it’s not, or maybe it seemed like a good idea at the time (it wasn’t), but now you need someone to help you get out of your choice. Fortunately, there are people who have turned “I told you so” into a service.

I find these “back out businesses” to be a good and needed reminder that technology is really not about moving forward. Yeah, the marketing guys and investors may want it to be, but as anyone who’s spent time in the industry knows, it’s not the case. Technology is a tool, and if the tool doesn’t work or is a bad choice, you want out of it. The latest, newest, fastest is not always the best – and may not be the best years later. Technology is not always about forward, even if someone tells you it is (before they sell you yet another new gizmo).

Considering the many, many changes in the world of tech, from social media to search to privacy, I wonder how much further “back out businesses” might evolve. Will there be coaches to get you to move to federated social media? How do you help a company get out of a bad relationship with a service vendor with leaky security and questionable choices? For that matter, can we maybe take a look at better hosting arrangements and websites that aren’t ten frameworks in a trenchcoat?

I don’t know, and the world is in a terribly unpredictable state. But I’m amused to think that somewhere in my lifetime the big tech boom might be “oops, sorry.” Maybe we can say “moving away is really moving forward,” get some TED talks, and make not making bad immediate choices cool.

Steven Savage

Long-term Language Misery

AI is irritatingly everywhere in the news and discussions as I write this, like sand after a beach trip. Working in IT, I could hold forth on such issues as reliability, power consumption, or how they’re really “Large Language Models” (Clippy on steroids). But I’d like to explore something that does not involve complaining about AI – hold your surprise.

Instead, I’d like to complain about people. What can I say, sometimes you stick with traditions.

As is often noted in critiques of AI, these systems really are a sort of advanced autocomplete, which is why I prefer the term Large Language Model (LLM). They don’t think or feel or have morals, or anything else we attribute to humans and intelligence. They just ape the behavior, delivering information and misinformation in a way that sounds human.

(Yeah, yeah, it’s a talk about AI, but I’m going to call them LLMs. Live with it.)
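
If the “advanced autocomplete” framing seems glib, here’s a minimal sketch of the idea in Python – a toy bigram model, nothing like a real LLM’s neural network, and the tiny corpus is made up purely for illustration. Still, the generation loop is the same shape: predict the likeliest next word, append it, repeat, with no understanding anywhere in sight.

```python
# Toy "advanced autocomplete": pick each next word purely from
# observed frequencies. Illustrative only -- real LLMs predict
# tokens with a neural network, but the loop is conceptually the
# same: predict, append, repeat. No thinking, just statistics.
from collections import Counter, defaultdict

corpus = "the plane was late the plane was full the crew was kind".split()

# Count which word follows which (a bigram table).
nexts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nexts[a][b] += 1

def autocomplete(word, length=6):
    out = [word]
    for _ in range(length):
        options = nexts.get(out[-1])
        if not options:
            break
        # Always take the most frequent continuation: it sounds
        # confident, but it's just string statistics.
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(autocomplete("the"))  # -> "the plane was late the plane was"
```

Scale that table up to billions of parameters trained on much of the internet, and the output gets fluent enough to pass for human – which is exactly the problem.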

However, when I look at LLM bullshit, misinformation, and mistakes, something seems familiar. The pretend understanding, the blatant falsehoods, the confident-sounding statements of utter bullshit. LLMs remind me of every conspiracy theorist, conspiritualist, political grifter, and buy-my-supplement extremist. You could replace Alex Jones, TikTok PastelAnon scammers, and so on with LLMs – hell, we should probably worry how many people have already done this.

LLMs are a reminder that so many of our fellow human beings spew lies no differently than a bunch of code churning out words assembled into what we interpret as real. People falling for conspiracy craziness and health scams are falling for strings of words that happen to be put in the right order. Hell, some people fall for their own lies, convinced by “LLMs” they created in their own heads.

LLMs require us to confront many depressing things, but the fact that we’ve been listening to the biological equivalent of them for so long has got to be up there.

I suppose I can hope that critiques of LLMs will help us see how some people manipulate us. Certainly some critiques already call out conspiracy theories, political machinations, and the like. These critiques usually show how vulnerable we can be – indeed, all of us can be – to such things.

I mean, we have plenty of other concerns about LLMs and their proper and improper place. But cleaning up our own act certainly can’t hurt.

Steven Savage