Yellow Sticky Notes And Operating Costs

(This column is posted at www.StevenSavage.com, Steve’s Tumblr, and Pillowfort.  Find out more at my newsletter, and all my social media at my linktr.ee)

Once, many years ago (I think in the 2010s?) I interviewed at a video editing software company to be a Project Manager. When I asked what tools they used to track work, they pointed at a glass divider covered in sticky notes. That was it. That’s how they wrote video editing software, which, as you may guess, is not exactly a simple process.

If you’re familiar with Agile methods, this may not seem entirely unusual. If you’re not, I’ll summarize all too simply: Agile is about breaking work into small, easy, tested chunks as you go through a larger, prioritized list of work. It’s basically quick, continuously evaluated development of software in order of importance.

So sticky notes were, in theory, all you needed for Agile, especially if the Product Owner (the person with The Big List Of Stuff To Do) had their act together. I’m going to assume this company had one who did since, hey, sticky notes.

This experience stuck with me. Now, some 15+ years later, having used many project management tools, seen many technical innovations, and been friends with people in tech for decades, I find a lot of us seem to want the sticky notes back.

We’re beset by an enormous choice of tools, and the tools themselves are full of choices. You can buy this software package or that and integrate them. Each has its own workflow you have to learn, but you can also customize that workflow, so you can confuse yourself your own way. Plus you have to make everyone else’s tools work together in some half-baked integration.

But when all of that doesn’t work, does the tool fix it? Nope, you get to! So soon you’re downloading a spreadsheet from one tool to load into another, then correcting the issues by hand. That’s if you can think like the people who designed the tools or the workflow, and those people weren’t you.

Past a certain point, all our new helpful tools require so much learning and reconciliation that we might as well use sticky notes. And yes, I have met people who still use sticky notes in otherwise high-tech organizations.

I’ve begun to wonder if we’ve entered an era where we’re so awash in tools that the price of learning them, customizing them, and integrating them outweighs their value. This is amplified by the latest updates and changes from vendors, companies being bought out, and regulation and policy changes. There’s a lot of change and adaptation we have to put time into so that we can, theoretically, be efficient in the time left.

And that’s before there’s a software outage somewhere in this Rube Goldberg world of ours that brings it all to a halt. I’m looking at you, CrowdStrike; I still have trauma as I write this.

I’m finding a great test of good software is to ask how it would work if it wasn’t software. What if it was, I don’t know – done by yellow sticky notes? What if the software wasn’t software but a human-recorded, human-run physical process? Would it still make sense?

This is something I noticed working with certain medical and research software. Some of it may have old-school looks, or be specialized, but it works (and has to, or people get hurt). I once took a training course on medical software that was insanely complex because of the medical processes involved, but in review everything I learned made perfect sense, and I could see how it’d be done on yellow sticky notes. Even I, some IT nerd who shouldn’t be allowed anywhere near a patient, could figure out how it all came together – and people had, decades before the software existed.

Sometimes it’s worth asking “what if we did this old school” to see what the software should do, and how much it would cost to change everything or render it incoherent.

And, hey, maybe you’ll just go back to the sticky notes. Maybe you should.

Steven Savage

Evil, Opposition, and Inscrutability

(This column is posted at www.StevenSavage.com, Steve’s Tumblr, and Pillowfort.  Find out more at my newsletter, and all my social media at my linktr.ee)

“Hey, are these people evil or just stupid?”

“Do they really believe in what they say?”

I’ve heard this in some form of political discussion for most of my life. We see more of it right now because of the politics of 2025. But the question haunts humanity – are the terrible things people do born of belief or of stupidity? It’s as if we want the comfort of knowing someone chose evil, because if they didn’t, then more of us can do evil for other reasons. There’s an absolution in being able to label someone stupid or foolish, figuring that will never be you.

This is also something that has bedeviled me in project management, if to a lesser ethical degree. One often confronts poor decisions, and as I work in IT, where poor decisions accumulate in the form of code and hilarious security breaches, one confronts history as well. There is nothing like a decade of “who the hell made this call?” rattling around your head (worse when you find the person in question might be you).

I’d like to propose a third option: sometimes people’s bad decisions can be born of opposition.

Among my many interests is why people believe conspiracy theories. A theory I’ve seen pop up a few times is that conspiracy thinking is often a form of Oppositional Defiant Disorder – that many (not all) conspiracy theorists believe in conspiracy theories as a form of opposition. They’re hard to talk out of it, as opposition just hardens their beliefs.

We’ve all dealt with people like that (a few times, we may BE that person), where telling someone they’re wrong makes them “wrong harder.” With conspiracy theorists – especially the ones who make a living at it by podcasting, writing, or being in politics – many will double down on their beliefs. If you think about it, that means they have a belief structure that is increasingly and aggressively wrong, and they act on it.

Now imagine someone making very bad decisions and choices, not out of malice or actual belief, but literally because their entire structure is composed of ideas created in opposition to critique. They act on a form of anti-belief.

Go ahead, think over the bad choices not just in today’s politics but in finance, software, and your job. How many of the absolute worst decisions would be explained by someone, at some point, doing the opposite of common sense purely out of opposition to advice?

If you get very “oppositional” to good advice, you WILL construct a worldview and policies and plans based on the worst stuff you can do. It might not necessarily be evil, but as it’s a very active form of stupidity, it gets close.

Now, look at the world and ask if certain people got told “no” so often that they literally make the worst choices just to avoid the better ones they were told to make.

Steven Savage

Make My Pain Original

(This column is posted at www.StevenSavage.com, Steve’s Tumblr, and Pillowfort.  Find out more at my newsletter, and all my social media at my linktr.ee)

I’ve been contemplating Artificial Intelligence producing art once again. Many of my friends are creatives, so it’s a subject of near-constant discussion, and we wish it wasn’t. Honestly, there are moments I wonder how people like tech critic Ed Zitron keep their sanity.

To level set: what is called “AI” these days has existed for decades, collections of math and data that allow computers to perform certain complex behaviors. It’s been in medicine, in graphics programs, and so on; we just didn’t call it “AI” until recently. What we’ve got now is a bunch of language- and drawing-aping tools that burn huge amounts of power and water, marketed by companies that aren’t profitable or fear there’s no more “big new thing.”

With that out of the way (and it will doubtless be repeated in my works), let me focus on the idea of AI being a substitute for artists and writers. To get more specific: AI isn’t going to make me feel, and one thing it can’t do right is make me feel something unpleasant.

Some art and writing involves discomfort. It is a new idea that sits in your mind, warping it around conclusions you’re afraid to make. It’s a realization of disturbing truths in your past, revealed by fictional characters acting out something unpleasantly close to your own life. This discomfort can be horrific or a pleasantly painful sense of walls shattering, but art can make you uncomfortable in a good way even if the feeling itself is bad.

To give an example, I was watching the anime Headhunted in Another World, something in the sent-to-another-world isekai genre – but with a businessman, so you can guess why I watched it. One element involved a species with poor reproductive capacity, most of whom were really crossbreeds because those were more viable. That led to all sorts of disturbing elements of family, ability, and even personality being affected by genes. For a “fantasy office comedy” with romcom bits, that little revelation was thought-provoking and disturbing.

Not an original show in many ways, but the creator clearly had ideas. Imagine having siblings who are ALL half-siblings; imagine having a relative who is alien to you. As is a common trend for the unkillable isekai genre, there are some very good ideas embedded in the tropes – though, to be fair, Headhunted does a lot more with the subject than some, but I am biased.

I am not for gratuitous disgust or horror or unsettling weirdness, of course. However, sometimes unpleasantness is a key part of creativity. You need to feel horror, or have a strange insight, or sense that something is off in a mystery or in art. Sometimes you need to look over the edge of the canyon to realize how deep it goes and know that it is deep.

I’m not saying AI or AI-aided work can’t appear to create discomfort, but when it does, it won’t be original. AI is simply about recycling the past ideas it’s accumulated; it has no experiences or life to draw on, nothing to create new insights. That vital need for art to unsettle will, at best, be met with recycled disturbing ideas from the past, and at worst be unsettling because AI sort of is terrible. I’m not sure we can have the proper level of disruptive insight and experience.

It’s like horror movies, which often seem to recycle the same ideas in huge repetitive bursts. In some ways we’re still remaking Friday the 13th (though I’d argue that’s really remaking Black Christmas). One reason I tend to avoid horror films is the endless fall into sadism, jumpscares, and the same tropes. It’s why works like Radius and The Ritual really hit me; those do it right.

In short, one of the problems with AI is that, as it repeats things and has only the past, it’s not going to have that proper razor-edge feel that forces me to feel, or that signals an unpleasantly fascinating realization. If there’s no person there, there’s no one to connect with me in ways that’ll knock me off my perch and make me think and feel.

And that is just if AI has anything that might make me think or ponder disturbing truths. How easy would it be for automated scripts and novels to shear off anything disturbing or offensive? They might cosplay as something original, but tweaked algorithms and careful queries can slice away anything that may anger an audience.

So, ironically, as much as AI is a pain to deal with, I feel it’s not going to unsettle me in the right way. It’s not going to make me think, confront, or ponder darkly. It’ll be the same insights and weirdness and horrors and sadness as before. There’s nothing new there.

AI hype is painful to deal with, but it doesn’t seem AI can hurt in the right way.

Steven Savage