Q? Or The Algorithm?
In the film ‘Bird Box’, based on the book of the same name, humans are driven to such a degree of despair that they take their own lives when they see the shadow of their consciousness — the thing they most fear. They may even take a few people down with them along their path of self-destruction. The transformation to homicide-suicide is stark, to put it mildly — and one of the more frightening things I’ve seen on screen without a more typical, gooey, slimy monster ever actually appearing.
We don’t know precisely what they see, but we know it’s dark, at the depths of one’s fears and despair. In Bird Box, the effect of seeing your shadow is swift, merciless, complete, and terminal.
The Moment of Fear and Despair
The transformation is like a switch in the brain being quickly thrown. Its effect is so unsettlingly sudden in the film that you realize there’s no escape. For those who catch a glimpse of their shadow, it’s over. It’s as if the neurological linkages between sight, thought, and action are short-circuited.
I mention this because the film came to mind as I watched what the Algorithm wrought last week at the US Capitol. That sudden mayhem that was real, and tangible, and kinetic and frenzied reminded me of those moments in Bird Box when everything went crazy and got real real. What darkness caused that violent trigger?
I could only think of the Algorithm.
The Algorithm has a remarkable ability to map our psyches based on our micro-UX interactions and the patterns it interprets in our behaviors, both on and off the screen. Somehow the Algorithm is able to mine for these dark shadows in our consciousness and turn us into raving, mindless lunatics.
I think there may be a correlation between the shadow that lurks in us, the Algorithm, and what popped off at the Capitol. I don’t know for sure, but it seems like we saw a mob that was hopped up on falsehoods that reached epidemic proportions, and that epidemic was fomented by the Algorithm, which exponentially propagated those falsehoods into the minds of people who thought it was all a good, righteous, justified idea to hunt down lawmakers, pillage and plunder their offices, and then, like…crow about it openly on the Internet. WTF?
How did the Algorithm help all this along?
Well, as near as I can figure, the Algorithm just assumes that, heck — if we ‘like’ what the UX designers designed for us to see, then we want more of it, just like drug addicts like their drug and want more of it. And the Algorithm is running at a high clock rate, and it doesn’t get tired so it just keeps giving us more and more, and we take more and more for hours every day, day after day.
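To make that feedback loop concrete, here’s a deliberately crude toy simulation. To be clear: this is my own sketch, not any real platform’s code — the topics, click probabilities, and the 1.05 “boost” factor are all invented for illustration. The point is just the mechanism: the feed shows more of whatever gets engaged with, so whatever is most clickable snowballs.

```python
# Toy sketch of the engagement feedback loop: show items, boost whatever
# gets clicked, and watch the feed narrow toward the most clickable thing.
# All numbers here are invented for illustration.
import random

def run_feed(steps=1000, seed=0):
    rng = random.Random(seed)
    topics = ["news", "cats", "outrage", "sports"]
    weights = {t: 1.0 for t in topics}  # how often each topic gets shown
    # Suppose outrage is the most "engaging" -- the user clicks it more often.
    click_prob = {"news": 0.2, "cats": 0.3, "outrage": 0.6, "sports": 0.2}
    for _ in range(steps):
        # The feed samples what to show in proportion to current weights.
        shown = rng.choices(topics, weights=[weights[t] for t in topics])[0]
        if rng.random() < click_prob[shown]:
            weights[shown] *= 1.05  # engagement begets more of the same
    return weights

weights = run_feed()
# After enough rounds, the most clickable topic dominates the feed.
print(max(weights, key=weights.get))
```

Nothing about the loop asks whether the dominant topic is good for you; it only asks whether you clicked. That, roughly, is the mechanic I mean when I say the Algorithm “just keeps giving us more and more.”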
And if we’re not attentive to what’s actually happening to our brains, to the degree that we mistake the Algorithm for light and truth rather than the seductive tendrils of an operating system built to monetize our fragile psyches, the Algorithm will happily, efficiently, effortlessly feed us shadow and lies, because it seems like that’s what we’re really into. The outcome is that we risk absorbing its effects into our bodies and our consciousnesses and becoming an embodiment of that darkness and those lies. And doing dumb shit like attacking people or going into homicidal rages.
It’s like developing a fondness for a powerful synthetic opioid, or booze, or cigarettes, or gambling, and telling yourself you have it all under control, or that you’re just being social, or that this epistemology is right and the other one about ‘facts’ is all wrong. We’ve heard all this before, but nevertheless — we let the Algorithm have the run of the world.
It’s an unbridled lunacy machine, and the difference is that those other addictive materials at least come with some robust comprehension of their potentially deleterious downsides, as well as some authority around them that attempts to rationally manage their consumption for the greater good.
Faceboro
The Algorithm monetizes this evolutionary defect in our psyches, this tendency toward addictive behavior. This is the most unsettling aspect. The Algorithm is a business model. So now there’s a tangible incentive to make the Algorithm more effective and incisive, which isn’t ‘better’ in the normative sense of things.
This business is a dirty one — it leverages a human defect in order to make a shit-ton of money. And then it addicts the engineers and UX researchers and product designers through the morphine drip of huge salaries. A vicious circle, like a car crash that never ends.
(Parenthetically, there was an Algorithm maker engineer person who wrote a thorough and precise Medium post about why and how they made so much money doing what they do to make it more effective. They seemed to represent that their pay was for their acumen at tending to the Algorithm without even a passing consideration that maybe they were financially rewarded because their work caused more people to get caught in the Algorithm’s frenzied machinations. They thought they were exempt from recrimination because they were just doing a good job at an honest trade, like the drug kingpin’s accountant who attempts to justify their role by saying they just look after the books.)
The beneficiaries of the Algorithm would have to stop doing what they’re doing, reorganize as a B-Corp, focus all their resources on stopping the spread of hate — and also refocus all those clever engineers and bright UX and ‘product’ designers to do nothing but work on algorithms to mitigate the climate disaster, and figure out how to manage economic inequity and democratic reform, or something with a righteous ethical basis. And then, maybe, there’s a redemption story.
Phew. That feels better having said all that.
Well — so you’re wondering — what’s this have to do with Design Fiction, eh? I’ve wondered the same myself as I’ve been contemplating this topic since last Wednesday.
For some time I’ve been referring to ‘The Algorithm’ as if it were a singular beast of some sort, mostly with a wry bit of humor attached to it. Any time the shadow of that thing poked through a ‘User Experience’ I’d say something like, ‘There it is! See it! There’s that menacing Algorithm, creeping around just at the edges of your screen!’
It got to where friends would say the same thing, nodding knowingly: ‘Hey, yo! Look — the Algorithm just auto-completed!’ Or, ‘How’d the Algorithm know I thought about buying this tea kettle!?’
Usually, it was mildly annoying, innocuous stuff like this. It was nothing that felt like it was going to take over your consciousness so long as you knew what was happening, how it happened, and kept a kind of critical distance from it with a bit of digital hygiene. You know, like taking a break by uninstalling that App — a bit of self-care like stopping after the second drink. Or clearing caches and cookies routinely. Or maybe reading the Silicon Valley equivalent of medical journals to keep up on the latest techniques the UX and ML world is developing so you can, at a minimum, be informed as to the ways that the Algorithm is getting after you. The digital world equivalent of a daily exercise routine or brushing your teeth after meals.
Last week a friend of mine told me he was going to smoke a bit of weed to settle his mind with everything going on and get a good night’s sleep but then the Algorithm got him and pulled him back onto Twitter and he completely forgot his best laid plan for a bit of a medicated respite. So much for self-care in the episteme of the Algorithm, eh?
Design Fiction Stimulus: The FDAA
So then, what are the images of a better future, one that recognizes the benefits of some kinds of algorithms, but ones that are better behaved? Not the future, but one of many possible habitable futures?
How about some Design Fiction prompts and stimuli? Images and imaginaries that help us understand the possibility of a world where we can live with the upside of all those clever engineers’ ingenuity, without that ingenuity being used to feed us mercilessly to this uncaring, unsympathetic business operating system that just wants to monetize that evolutionary defect in our brain that the Algorithm exploits every millisecond?
And I’ve been pondering this since last week. Thinking about the different ways to express some hopeful, beneficial, reasonable alternative futures as Design Fictions. What is a world in which the Algorithm is somehow tamed or better managed?
I started to imagine short little snips of conversations that might occur in that possible near future. For example, a near future in which we look back at this moment and think, “Boy. That was wild, eh? I’m glad those ingenious AI and ML engineers and UX researchers and designers developed a professional creed sorta like the Hippocratic Oath for medical professionals, and that they all pledge to adhere to it when they finish their studies at Stanford’s Behavior Design Lab.”
That might wind up as an exercise to actually write that fictional oath — as a Design Fiction. What does it say? Who takes this oath? What society or agent or moral anchor compels oath-takers to adhere to it? Are AI and ML engineers and UX designers professionalized and state-licensed, so that there’s some authority that can sanction them if they misbehave and make bad algorithms again? Is there an exam to make sure you can safely operate as this kind of professional? I mean…Arturo my barber had to take and pass a state exam. I think barbers are probably less at risk of fomenting an insurrection than a social media platform, so it doesn’t sound too far out there.
Here’s another one where someone says, “Yeah, that was a tough patch, but the industry finally realized that, actually, people are happier and more likely to use their service with this regulated, better behaved Algorithm. It’s like when automakers got over their reluctance and actually designed really well-thought through seatbelts and active restraints so people stopped flying through windshields and dying in car crashes.”
Is that an acknowledgment that there was something wrong here, and that we’re going to genuinely fix it because we actually don’t want the reputational damage of making the world more deadly? Nor do we want people to die, because then it looks like our Corporate Values and Ethics are, well — that we don’t have any.
Maybe it’s, “Well, it may be best if an independent, publicly-managed third party like the FDA took an active role in managing this stuff, because the Algorithm is a behavior-modifying neurotoxin like nicotine, opioids, and a lot of other classified products the consumer is exposed to, so it kinda makes sense that such an administrative agency would play a role in sorting all this out.”
These are the ‘What Ifs?’ that can provoke the Design Fiction work in a variety of directions, all of which are worth making tangible for the purposes of evolving the imagination through imaging.
Most really good ‘What Ifs’ for a little couple-hour design sprint like this would be audacious, provocative, and probably make most of Silicon Valley gavel the table with a knock of a knuckle and leave the room in bafflement.
I get it. Feels threatening to have anyone think about a different future than the one you feel you deserve because you’re a genius as evidenced by your paycheck that keeps growing which means you must be doing something extraordinarily right and just.
But sometimes creating more habitable futures, even ones that cut against a free-wheeling business model, means the model needs to adapt before it destroys the world. And the adaptation needn’t mean the end of business. It may just mean that enough’s enough and it’s time for a correction.
I landed on the third one — the image of a future with something called the FDAA — as my response to the quandary. Any of these or others would suffice, but I just wanted to quickly sprint toward some single bit of stimulus that might serve to open the conversation.
I’m not suggesting that I’ve solved anything, and I’m certainly not convinced by this provocation — that’s not the point of Design Fiction. This particular output is what we call “stimulus” — it’s just a nudge to spark a conversation around a topic.
I quickly cobbled this Design Fiction stimulus together. It uses the archetype of a moment of UI interaction we’re all familiar with — the pop-up cautionary confirm-or-decline moment. It’s got lots to it, partially for the sake of being provocative, but it was also a good exercise to try and enumerate all the deleterious effects I could think of that result from flat-spinning into the Algorithm.
The idea of an FDAA is something I see as a kind of residue of a sequence of events that maybe start in one particular near future in which, I don’t know — let’s say Rep. Ted Lieu puts forth legislation that’s debated for months and months and that finally establishes the FDAA (Food, Drug and Algorithm Administration) — an evolution of the existing Food and Drug Administration (FDA).
Normally in a typical Design Fiction exercise there would be quite a few of these stimuli, all meant to help set context, frame and image possibilities and warm things up. There would also ideally be a group of stakeholders contributing and gathering and discussing. But, you know — pandemic and all. This is just one, done quickly, alone in the studio. Nevertheless — some quick reactions to the ‘FDAA UI warning pop-up’ archetype after letting it marinate for a day.
My first reaction to my own stimulus is, eww…more bureaucracy susceptible to lobbying, revolving doors from industry to government and back again, ineffectual, watered-down legislation, etc. But I try to resist dismissing the notion out of hand. “No bad ideas” when creatively imaging possible futures and all that, even when you’re working alone under quarantine.
My second reaction is that I like the strength of this signal from the near future. It suggests someone taking responsibility and doing some really hard work to get such an outcome. I imagine this is the kind of work that requires a heck of a lot of persistence and dedication and wily political maneuvering. It comes across as meaningful, solid, righteous work that is genuine and actually wants to make things better in a more selfless way than making sure the bottom line keeps growing.
My third reaction is, wow — this Algorithm is really dangerous. Look at all of those downside effects. It’s like that Big Pharma drug that looks like it makes the world a better place, until they tell you at the end that, for some people, it may cause a heart attack, or sleeplessness, or a rash. This stimulus suggests that we need some entity to manage the Algorithm better than it’s being managed today. It also implies that the body politic doesn’t really trust the Jacks or the Marks of the world to monitor and manage themselves and their influence on the public discourse, no matter how Zen-bearded they may be.
These tensions between good and bad reactions, between light and shadow are what make a Design Fiction thorough and useful — and what opens up the conversation and helps us reconcile the good intentions with the unintended and deleterious outcomes. And, ultimately, imagine and design better futures.
So, that was pretty heavy stuff. Next week, we’ll discuss best practices for applying warm caramel sauce onto ice cream.