Dada And Empty Media

(This column is posted at www.StevenSavage.com, Steve’s Tumblr, and Pillowfort.  Find out more at my newsletter, and all my social media at my linktr.ee)

Though I don’t discuss it as much here, I have an interest in the art movement of Surrealism and its origins. Surrealism is fascinating in its many manifestations, it intersects with political and cultural movements, and its many personalities are compelling. As I continue to learn about it, I keep finding new lessons, one of which I want to share here.

Surrealism’s origins are rooted in Dada, an art movement that appeared during and just after World War I and was mistrustful of the supposed age of reason and the horrors of the time. Dada appeared to be art, in the form of paintings or performances and such, but was intentionally nonsensical. Today it may seem amusing, but at the time people found it infuriating – imagine giving a speech made of nonsense words and watching angry folk riot.

Dada laid the groundwork for Surrealism, something else I may discuss, but what fascinated me most about Dada beyond that was that it used the framework of existing media and filled it with nonsense. What an idea – that the container of art can be abstracted from any meaningful content! Perhaps it’s easy to understand people angered by Dada, confronted with a play or a song or a painting that had the form of a work of art but was filled with nothing.

You can remove the art from art but still have a form we associate with art.

That idea has sat with me for some time, but I hadn’t done much with it – my interests were in Surrealism and how the artistic framework was a vehicle for unconscious, almost spiritual expression. But lately I’ve been thinking about Dada’s use of an artistic framework filled with nonsense, about internet content, and about what we can learn from the comparison.

It’s hard to find anyone who won’t complain about nonsense, slop, propaganda, and low-effort content on the internet. I certainly do, as my regular readers know and, to my gratitude, tolerate. I’m sure you’re also used to encountering and complaining about such things.

We wonder how people can take such things seriously. How can they fall for propaganda, low-info listicles, and the like? Well, that’s because, beyond our vulnerabilities or our ability to enjoy trash, it comes in the form of information. Internet dross has the shape of information or art or spiritual insight even if it’s filled with B.S.

No different than how Dada took the form of art and blew people’s minds by delivering rampant nonsense.

Think about how easily technology lets us have the form of something useful. It’s easy to spin up a website or a book or a video, pour anything into premade patterns, or turn to technology or freelancers to pour something into whatever information container we choose. We have the tools to make nothing look like something, to make form so good we easily mistake it for solid value.

And, sometimes, it rubs us the wrong way. We know it looks like information but it’s not. Maybe it’s easier to understand people enraged over Dada, tricked by form. We’re in the Uncanny Valley of Communication just like they were.

This is why the history of art and media matters, and why I treasure these rabbit holes I go down. The past has many lessons for the present. Come to think of it, maybe if we pay more attention to the past we’ll have a better present . . . one with not just form but form delivering real meaning and valuable information.

Steven Savage

But What If It’s Not Worth Doing?

(This column is posted at www.StevenSavage.com, Steve’s Tumblr, and Pillowfort.  Find out more at my newsletter, and all my social media at my linktr.ee)

OK, this isn’t another post on AI exactly. I get it, there’s a lot of talk about AI – hell, I talk about it a lot, usually whenever Ed Zitron goes on a tear or my friends in tech (i.e., all my friends) discuss it. If I were friends with Ed Zitron, who knows what I’d write.

The funny thing about AI is that it’s about automation. Yes it’s complex. Yes it’s controversial. Yes, it lets you generate pictures of Jesus as a Canadian Mountie (Dudley Do-Unto-Others?). But it’s automation at the end of the day. It’s no different than a clock or a pneumatic delivery system.

And, referencing a conversation I had with friends, when you automate something on the job or at home, let’s ask a question – should you have been doing it anyway?

First, if you get something you have to automate, should it have been assigned to you at all? If something really isn’t part of your portfolio of work, maybe someone else should do it. Yes, this includes home tasks – including the shelves you have not put up and almost certainly never will.

A painful reality I’ve come to realize is that many people take on tasks someone else could do, and often do better. For whatever reason the task drifts up to them, and of course they stick with it. Worse, the people who would actually be better at it might even have more time and would be hurt less by doing it.

A need to automate something often says “I don’t need to do it and I may be bad at it,” and the task should move up or down or somewhere else. I’m not saying automate it, I’m saying reassign it – to someone who may automate it anyway, but still.

Secondly, and more importantly, if you have a task that can be automated, it’s time to ask if anyone should be doing it, period.

Anything really important needs a person, a moral authority, to make a decision. You have both the decision-making skills and the ethical grounding to make the right call. Automation certainly doesn’t have the ethical element, and if the task doesn’t need your decision-making skills . . . why are you, or anyone, doing it?

The task might be unnecessary. It could – and trust me, I see this a lot – be the result of other automatic generation or other bad choices. It may be a signoff no one needs to sign off on, an automatic update you don’t need to be updated on, or who knows what else. I honestly think a lot of work is generated by other automatic processes and choices that could just bypass people anyway.

But there’s also the chance the task is unneeded, shouldn’t exist, or is really a bad idea. Look, if the task is assigned to you, a competent individual with good morals, and you want to automate it, maybe it just should never have existed. Much as good Agile methods are about making sure you don’t do unneeded work, good process in general is the same.

Whenever something has to be automated, it’s a good time to ask “why did it come to me anyway?” Because the answer may save you time automating, instead letting you hand it off, change how things work, or just ignore it.

And that’s not just AI. That’s anything.

Steven Savage

It’s The Ones We Noticed

(This column is posted at www.StevenSavage.com, Steve’s Tumblr, and Pillowfort.  Find out more at my newsletter, and all my social media at my linktr.ee)

People developing psychosis while using ChatGPT has been in the news a lot. Well, the latest story is about an OpenAI investor who seemed to lose it in real time, leading to, shall we say, concerns. The gentleman in question seemed to spiral into thinking the world was like the famous SCP Foundation collective work.

Of course people were a little concerned. A big AI investor losing his mind isn’t exactly building confidence in the product or the company. Or, for that matter, in investing.

But let me gently suggest that the real concern is that this is the one we noticed.

This is not to say all sorts of AI bigwigs and investors are losing their minds – I think some of them have other problems or lost their minds for different reasons. This isn’t to say the majority of people using AI are going to go off into some extreme mental tangent. The problem is that AI, having been introduced recently, is going to have impacts on mental health that will be hard to recognize because this is all happening so fast.

Look, AI came on quick. In some ways I consider that quite insidious, as it’s clear everyone jumped on board looking for the next big thing. In some ways it’s understandable because, all critiques aside (including my own), some of it is cool and interesting. But like a lot of things, we didn’t ask what the repercussions might be, which has been a bit of a problem since around the time of the internal combustion engine.

So now that we have examples of people losing their minds – and developing delusions of grandeur – due to AI, what are we missing?

It might not be as bad as the cases that make the news – no founding a religion or creating some metafictional roleplay that becomes too real to you. But a slightly weirder belief, that strange thing you’re convinced of, something less noticeable but still a step too far. Remember all the people who got into weird conspiracies online? Yeah, well, we’ve automated that.

We’re also not looking for it, and maybe it’s time we did – what kinds of mental health challenges are people developing due to AI that we’re simply not watching for?

There might not even be anything – these cases may just be unfortunate ones that stand out. But I’d really kind of like to know, especially as the technology spreads, and as you know I think it’s spreading unwisely.

Steven Savage