Teaching Implication Better
Hello everyone! It's finally March, or at least close enough to March for my purposes. First thing: we're a month off from April Cools! April Cools is a less-cringe version of April Fools, where content creators like me publish content that is both genuine and totally out of genre. Last year I took a break from software engineering to write about microscopy. Other people wrote about singing church music, marathon food, and how to read rot13. If you've got a blog, I'd heartily recommend joining! It's a lot of fun.
Anyway, between that and a bunch of work obligations, this is going to be a real busy month for me. I'm still committed to updating the newsletter six times a month, but it might be a bit erratic: instead of the alternating 1-2-1 schedule I've been on, it might be 2-2-0-2 instead. Just a heads up.
(If you want to make the newsletter a little easier on me, why not send in a question? If I get a critical mass of questions, or one really really deep question, I'll do a newsletter about 'em.)
Now, two of the things this month are related: the TLA+ workshop in March and the predicate logic book I'm working on.^{1} Predicate logic is a prerequisite for using TLA+ well, but it also has a lot of uses outside that. So I'm always trying to figure out how to teach it better. And two topics in predicate logic cause me the most trouble:
- Alternating quantifiers: working with statements of the form "for every X there's a Y" and "there's a Y that works for every X". We'll skip this one today.
- Implication: P => Q means that if P is true, then Q is also true. That makes sense to most people. The problem is the truth table:
| P | Q | P => Q |
|---|---|--------|
| T | T | T      |
| T | F | F      |
| F | T | T      |
| F | F | T      |
False implies true and false implies false. People hate this and find it unintuitive. But the same properties that make it confusing also make it really useful, which sucks.
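Since the table is tiny, it's easy to check mechanically. A minimal sketch in Python (the `implies` helper is my own name; implication isn't a built-in operator, but it's equivalent to `not p or q`):

```python
def implies(p: bool, q: bool) -> bool:
    # "P => Q" is false only when P is true and Q is false.
    return (not p) or q

# Reproduce the truth table above.
for p in (True, False):
    for q in (True, False):
        print(p, q, implies(p, q))
```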
I normally teach it like this:
1. Define P => Q.
2. Show the truth table, admit that it's unintuitive.
3. Show a lot of examples of why it needs to be this way.
4. Throw up my hands and tell people they'll just have to work with it a bunch to build an intuition.
5. Start using P => Q to solve problems in predicate logic and hope people grok it eventually.
Locks before keys
Now there's this really good piece on teaching functional programming, called Locked doors, headaches, and intellectual needs. It draws from a theory in game design called problem-solution ordering issues: if you present players with a door and then a key, they will learn that the key unlocks the door. But if you give them the key first, they won't make the causal link to why it's important. The author then analogizes it to learning abstract ideas:
> Remember how, in high school math class, a lot of the work you were doing felt really, really pointless? (Even if you never felt that way, I can all but guarantee that at least a few of your former classmates did.) We often think of this feeling of pointlessness as an inevitable byproduct of math education. But what if, instead, it's partly a consequence of problem-solution ordering issues in the way we currently teach math?
I believe a stronger version of this: what if the solution is defined in terms of the problem? This is more in line with the motivations behind applied math: we have a problem, we come up with a specific approach to it, and then we realize the solution has broader properties we can examine on their own.
The purpose of implication
Implication has one dominating use case: putting conditionals on universal quantifiers.
Take the requirement "all accounts should have 2FA enabled." That's easy to express as a predicate statement:
all a in Accounts: 2FA(a)
But now let's say that only admin accounts must have 2FA enabled, and it's optional for everyone else. One way we can add that is to filter the set of all accounts to only include admin accounts:
all a in {a in Accounts: Admin(a)}: 2FA(a)
And then let's say that the new requirement was added at the beginning of the year, so if the account hasn't logged in since then, it doesn't need 2FA. Adding that to the set filter:
all a in {a in Accounts: Admin(a)
          && last_login(a) > 2022-01-01}:
    2FA(a)
So even with two simple additional conditions on our requirement, it already gets pretty nasty to model those conditions as set filters. This'll get worse if the property relates two elements in separate sets, or if we have independent sufficient factors, or anything else like that.
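To make the filtering concrete, here's a sketch in Python with a made-up `Account` record (the field names and the sample data are my own, purely for illustration):

```python
from dataclasses import dataclass

@dataclass
class Account:
    admin: bool
    last_login: str  # ISO date string, so > compares chronologically
    tfa: bool

accounts = [
    Account(admin=True,  last_login="2022-02-15", tfa=True),   # recent admin
    Account(admin=True,  last_login="2021-06-01", tfa=False),  # stale admin: exempt
    Account(admin=False, last_login="2022-02-20", tfa=False),  # non-admin: exempt
]

# Filter down to the accounts the rule applies to, then quantify:
relevant = [a for a in accounts if a.admin and a.last_login > "2022-01-01"]
ok = all(a.tfa for a in relevant)
```

Every new condition has to be threaded into that filter expression, which is where the nastiness comes from.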
So instead, we add a special operator P => Q, which means "the right-hand side only needs to hold if the left-hand side does, too."
all a in Accounts:
    Admin(a) && last_login(a) > 2022-01-01
    => 2FA(a)
Now implication is introduced to solve a problem. And now the weirdness can be introduced as "what properties does implication need, given that it's intended to solve this problem?"
In this case, we know that the universal quantifier has to be true for all possible elements of Accounts. If you pass in a non-admin account, then Admin(a) => 2FA(a) still needs to be true. If it's not, then we can't use implication to place conditions on properties, which is the reason we invented it in the first place. So to serve its intended purpose, we define P => Q such that Admin(a) => 2FA(a) is automatically true whenever Admin(a) is false. Which means that F => T and F => F are both true.
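The quantifier-plus-implication version can be sketched the same way in Python (again with invented account data; `implies` is my own helper for the => operator):

```python
def implies(p: bool, q: bool) -> bool:
    # "P => Q" is false only when P is true and Q is false.
    return (not p) or q

# (admin, last_login, tfa) triples; the data is made up for illustration.
accounts = [
    (True,  "2022-02-15", True),   # recent admin with 2FA:  T => T
    (True,  "2021-06-01", False),  # stale admin, no 2FA:    F => F, vacuously true
    (False, "2022-02-20", False),  # non-admin, no 2FA:      F => F, vacuously true
]

# Quantify over *all* accounts; implication handles the exemptions.
ok = all(implies(admin and last_login > "2022-01-01", tfa)
         for admin, last_login, tfa in accounts)
```

If F => T or F => F evaluated to false, the exempt accounts would falsify the whole quantifier, so the "weird" rows of the truth table are forced by this use case.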
The explanation is still a bit rough, but I've tried a dozen other explanations for implication and this is by far the happiest I've been with one. What I also like, besides motivating the need for the weird properties, is that it underscores how math is a tool that serves us. We define implication this way because it's useful to do so. We could just as easily define a more "intuitive" operator, say >=>, where !(F >=> T), if we find a good use for such an operator.
One thing I want to emphasize in the logic book is that math is valuable because it's so flexible. I think people have this misconception that since math is really rigorous, it must also be rigid and inflexible. But that rigor instead makes it creative and dynamic. It's a language where you can invent entirely new symbols and grammar, and as long as they're clear and consistent, other mathematicians will accept it as valid.^{2} Like, you know how x != y is the same as !(x == y)? Why can't I write x !&& y or x !in array in a programming language? In math I can just say x !op y ≡ !(x op y) and run with it. I'd have trouble if I tried to write x !&& y !&& z, which is probably why I can't do that in programming, but as long as I disambiguate or avoid that context, I'll prolly be fine.

I said I hoped an early version of the book would be ready by the end of winter. Not happening, sorry :( Full update and explanation later this week! ↩

They may be skeptical, because it's really easy to hide mistakes and confusion in new notation. You have to put in the work to show that you know what you're doing. ↩
If you're reading this on the web, you can subscribe here. Updates are once a week. My main website is here.