In my last email I compared the rituals of Communion and Instagram, arguing that both ring hollow when we forget the heart behind them.
For a brief moment last year, Silicon Valley’s hottest new product was an email app called Superhuman. The service costs $30/month (!), and its marketing sounds like the first draft of a voiceover for a luxury car commercial:
“Superhuman is not just another email client. We rebuilt the inbox from the ground up to make you brilliant at what you do. We specifically designed it for those of you who want the best…Superhuman is so fast, delightful, and intelligent — you’ll feel like you have superpowers.”
(Side note — I love how they couldn’t resist explaining the punchline of their own product name even though it’s the most obvious thing in the world.)
Superhuman started its marketing blitz last June: venture capitalists evangelized the app on Twitter, and the New York Times published an article titled “Would You Pay $30 a Month to Check Your Email? One of Silicon Valley’s buzziest start-ups, Superhuman, is betting its app’s shiny features are worth a premium price.”
One of those “shiny features” is what Superhuman calls “Read Receipts.” While the New York Times failed to mention any details about the feature, early-access Superhuman user (and former VP of Design at Twitter) Mike Davidson wrote a 4,000+ word blog post about it: Superhuman is Spying on You.
In Mike’s article (which is one of the most nuanced, thoughtful reflections I’ve ever read on how product decisions get made in Silicon Valley), he explains everything that’s wrong with Superhuman’s “read receipts” feature:
“You’ve heard the term “Read Receipts” before, so you have most likely been conditioned to believe it’s a simple “Read/Unread” status that people can opt out of. With Superhuman, it is not. If I send you an email using Superhuman (no matter what email client you use), and you open it 9 times, this is what I see: a running log of every single time you have opened my email, including your location when you opened it.”
Meaning: if I use Superhuman to send you an email, I can see when, where, and how many times you opened my email – regardless of what email app you use. Without you knowing. And to make matters worse:
“Superhuman never asks the person on the other end if they are OK with sending a read receipt (complete with timestamp and geolocation). Superhuman never offers a way to opt out.”
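Superhuman hasn’t published its implementation, but email open tracking like this is almost always done with an invisible “tracking pixel”: a 1x1 image with a unique URL embedded in the message, so that when the recipient’s mail client loads remote images, the request itself reveals the open. Here’s a minimal sketch of the idea; the names, the `tracker.example.com` domain, and the message ID are all hypothetical:

```python
import datetime

def tracking_pixel_html(message_id: str, base_url: str = "https://tracker.example.com") -> str:
    # An invisible 1x1 image embedded in the HTML body of the email.
    # When the recipient's client fetches remote images, it requests this
    # unique URL, and that request is the "read receipt."
    return f'<img src="{base_url}/open/{message_id}.gif" width="1" height="1" alt="">'

# In-memory stand-in for the sender's tracking database.
open_log: dict[str, list[dict]] = {}

def record_open(message_id: str, client_ip: str) -> None:
    # The server behind the pixel URL logs a timestamp for every request.
    # The requester's IP address can be mapped to a rough city-level
    # location with a geolocation database -- no consent required.
    open_log.setdefault(message_id, []).append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "ip": client_ip,
    })

# Simulate a recipient opening the same email three times:
html = tracking_pixel_html("msg-42")
for _ in range(3):
    record_open("msg-42", "203.0.113.7")

print(len(open_log["msg-42"]))  # -> 3, one entry per open
```

This is why the tracking works “no matter what email client you use”: the recipient’s app is just rendering an image, with no idea it’s reporting back.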
In his post, Mike imagines short stories to highlight the potential for abuse this feature enables. I’ve excerpted the first sentence of two of them:
“An ex-boyfriend is a Superhuman user who pens a desperate email to his former partner.”
“A pedophile uses Superhuman to send your child an email. Subject: “Ten Tips to Get Great at Minecraft”.”
I’m sure that no one at Superhuman wanted their email product used in that way, which raises two questions:
Did anyone at Superhuman think about the potential problems of exposing location data without consent?
If someone did raise concerns, why were they ultimately overruled?
Unfortunately, Superhuman’s oversight isn’t an anomaly. There are countless examples of how Silicon Valley’s blindly optimistic view of tech (among other attributes) creates “unintended consequences” that inflict irreparable harm.
Perhaps the most famous example: content recommendation algorithms. What was originally designed to help you “discover” (to use Valley terminology) things you’re interested in also enables the rapid spread of misinformation and the development of extremist views.
Or ad targeting. What was originally designed to show you the perfect pair of shoes also enables the spread of political propaganda and fear-mongering to the most susceptible demographics.
In Superhuman’s case, there are simple solutions to the problem they introduced: they could require users to opt in to sharing location data, or remove the feature altogether. (Superhuman did change the feature in response to Mike’s post, though he still felt the changes were inadequate.)
But as more and more of the core technology woven into our lives becomes software-oriented and algorithmically driven, finding solutions to “unintended consequences” gets thornier. It’s sometimes literally impossible to understand why things are happening the way they are. The algorithms are black boxes.
As Abeba Birhane, a PhD candidate in Cognitive Science at University College Dublin, writes in The Algorithmic Colonization of Africa:
“Data and AI seem to provide quick solutions to complex social problems. And this is exactly where problems arise. Around the world, AI technologies are gradually being integrated into decision-making processes in such areas as insurance, mobile banking, health care and education services.”
The issue with this change is that
“Society’s most vulnerable are disproportionately affected by the digitization of various services. Yet many of the ethical principles applied to AI are firmly utilitarian. What they care about is “the greatest happiness for the greatest number of people,” which by definition means that solutions that center minorities are never sought.”
“But their voice needs to be prioritized at every step of the way, including in the designing, developing, and implementing of any technology, as well as in policymaking. This requires actually consulting and involving vulnerable groups of society, which might (at least as far as the West’s Silicon Valley is concerned) seem beneath the “all-knowing” engineers who seek to unilaterally provide a “technical fix” for any complex social problem.”
Birhane’s suggestions for an inclusive design process sound a lot like what I’ve been reading about Jesus in the Gospel of Luke. Jesus says that he came
“to proclaim good news to the poor…to proclaim freedom for the prisoners and recovery of sight for the blind, to set the oppressed free.”
That does sound like good news. If I could do those things, I’d like to do them. But it also sounds lofty.
(I’ll refrain from making an “if Jesus were building apps, he wouldn’t collect your location data” joke… oops, I just made it)
I’ve spent this email pointing the finger outward. It’s time to point the finger back at myself. Superhuman, algorithms, Birhane, Jesus…
How am I pushing myself toward the margins: to consider minorities, to involve vulnerable groups of society?
I’ll end each newsletter with a question that’s been placed on my heart after writing. If it resonates with you, please reply to this email with your reflections! I’ll share some of the responses (anonymously) at the bottom of the next email. I hope we can share this journey together.
When have you been marginalized? Did someone fight to include you?
Last email’s question:
What kind of rituals — spiritual or secular — do you take part in, and why?
I try to start every morning by writing down three things from the day before and thanking God for them. It’s hard to know how that small practice has changed my thoughts, but I hope it has!
I think a lot about my intentions in doing the things I do. I definitely feel that the vast majority of instagramming is done for others, not for ourselves. I don’t post nearly as much as I used to because I can’t get past the thought that maybe I’m only posting because it’ll make others think of me a certain way. Of course I still have my camera roll but the fancy highlight reel will not be there. Kinda makes me wish I posted…but for myself, not for others.
One practice that I’ve implemented in a more intentional way is communal prayer. We’ve been gathering every Monday, Wednesday, and Friday morning at 5am for an hour of prayer. I used to pride myself (so to speak) on not being a morning person, but now mornings fulfill me and fuel me because Jesus keeps pouring in as He meets me there!