(This column is posted at www.StevenSavage.com, Steve’s Tumblr, and Pillowfort. Find out more at my newsletter, and all my social media at my linktr.ee)
My regular readers will know that Dan Davies’ The Unaccountability Machine was a big influence on me. If you didn’t know this, well, you’ll probably keep hearing about it every now and then. Anyway, the short summary of this must-read book is that a lot of our systems (government, business, etc.) go off the rails because they focus on a few metrics, insulate themselves and their leaders from impact, and become destructive.
There, I summarized an enormously complex book that sums up decades in a paragraph. Go me. Anyway, on to the subject.
I was reading a recent article in 404 Media on how people are “staffing” companies with AI, or even discussing having entire companies that are just bots/agents. Yes, it won’t surprise you that people flush with cash or wild ideas imagine a world where they just automate everything and rake in money. Yeah, you’re not surprised.
Now there are many things wrong with this idea, from data center water burn to legal complications to AI being surprisingly crappy at many jobs. But I want to address something about what it’d be like to run a company with a bunch of stochastic systems doing work for you, because this sounds like the fears of The Unaccountability Machine taken to their logical conclusion. Or illogical conclusion.
Anyway, let’s imagine these AI companies, these automated companies, and what we know about AI. You have a lot of automated processes running things, running them with no moral agency because they’re not people. We know how sycophantic AI can be dangerous because it tells you what you want to hear, not what you need to know. All of this is abstract and distant from real human experience, moreso because of the hype cycle.
What you’ve got here is, well, an Unaccountability Machine. A nearly completely automated company of AI agents spinning around one person is not going to get good, safe decisions. You may get something you can use to juice stock and sell off, but it won’t be safe.
What you have are devices that ape human awareness, using old data, telling people what they want to hear, and when things go wrong the AI takes the blame. You have people insulated from real information, focused on limited measures, and using technology that will sound like it’s kissing up to them. All this does is amplify what happens to various leaders anyway in our decaying government and business systems.
So, really, it’s just business as usual but faster. You can spin up bad ideas and unaccountability quicker.
Now I suspect a lot of this is just juicing stocks, posturing, and trying to ignore how AI costs are going to go up and legal issues will proliferate. So I’m more concerned about what happens in the meantime, and I doubt it’ll be good – and then the work of these “auto-companies” will need to be walked back.
Honestly, I hope most of them are scams. Maybe that’d be good.
I suspect Dan Davies is going to have to write yet another book.
Steven Savage