In Superintelligence: The Idea That Eats Smart People, Maciej Cegłowski talks about analyzing AI risk from the perspective of the "outside view":
Say that some people show up at your front door one day wearing funny robes, asking you if you will join their movement. They believe that a UFO is going to visit Earth two years from now, and it is our task to prepare humanity for the Great Upbeaming. […] The outside view tells you something different. These people are wearing funny robes and beads, they live in a remote compound, and they speak in unison in a really creepy way. Even though their arguments are irrefutable, everything in your experience tells you you're dealing with a cult.
Of course, they have a brilliant argument for why you should ignore those instincts, but that's the inside view talking. […] The outside view doesn't care about content, it sees the form and the context, and it doesn't look good.
This is one of the most powerful tools in doing research. Analyzing the technical merits of a practice, or technology, or $THING is really hard. Ideally we'd give everything we explore a fair shake, but we have limited time and need heuristics to decide what to spend time on. If something fails the outside view, then we can probably skip it without investigating it further.
What does the outside view look like in software? In my experience, here are a few red flags:
None of these are definitive: some ideas can be valid while still having some red flags, and crackpot stuff might not check everything. But the more red flags you see, the more you should be nodding politely and moving to the door.
Notice that none of these are even about the $THING! That's the power of the outside view: it helps you make these decisions before you dive into the details.
Let's start with something that checks most of these boxes: Carl Hewitt and the actor model of computation.
(Not the actor model of concurrency, which was primarily developed by Gul Agha, though Hewitt nowadays takes credit for it. I'm talking about the original actor model.)
So yeah, we can safely ignore the actor model of computation, at least until someone who's not Carl Hewitt (or his disciples) makes major improvements.
Note that it doesn't go both ways: someone can be wrong without being a crackpot. As a negative example, consider Uncle Bob Martin and his corpus of work. I think he's wrong about many things and find his personal views odious. But I cannot dismiss him with the outside view test!
The outside view doesn't give us an easy answer here, so Bob's software ideas should be evaluated on their technical merits.
As a researcher, the outside view helps me filter information. As an advocate, the outside view tells me what not to do. Not just because people can sense it, but because these things are corrosive. They make the community unhealthy. So I'm always checking that my FM advocacy doesn't fall prey to these things.
Maybe that's why I use the outside view so much. I'm afraid of seeing it in myself.
This is a little subtle. Plenty of good ideas connect to a lot of other things, but their proponents do the groundwork to make those connections meaningful. "Racing thoughts" is more when people spray connections from a firehose without bothering to check if they actually make sense. ↩
If you're reading this on the web, you can subscribe here. Updates are 6x a month. My main website is here.