(Note, there's some important administrative stuff after the newsletter.)
I think a lot about the nature of expertise.
While I write about all sorts of software topics, I'm actually paid to do formal specification work. Historically that's meant running workshops for companies or reviewing their specs. This means that I experience "being an expert" in relation to other people who are complete beginners, and that's influenced my thinking about what "expert" means. Often beginners get stuck on small things and pitfalls, like using the wrong data type or not knowing the standard way to solve a problem. So I think of an "expert" as someone who's internalized all the standard moves and can just get on with solving the problem.
In other words, the main difference between a beginner and an expert is that the expert has "seen it all before". They can quickly solve standard problems and can pattern-match new problems to old ones. But once they've built their mental models, the beginner will be able to do anything the expert can... just a lot slower.
This year I was also involved in two extremely large specification projects, one in TLA+ and one in Alloy. The projects were for very large and very complicated systems, and both took a lot more work to specify than the average production system. I joined the projects as a trainer and coach, but in both cases I was much more useful as a sort of "technique inventor". Formal specification is still a surprisingly young field and there's a lot that people don't know how to do. I had to figure out how to do it.
For example, it's generally very hard to compose two specifications. People want a library of prefabbed specs they can glue together, the way you can in programming, but conventional wisdom says you shouldn't do this: you should write everything as one big spec. That's usually the better approach.
But in the TLA+ project we needed to model complex interactions between several distinct components, and each component had complex internal behavior that we also needed to specify. The only way to do this was to write separate large specifications and combine them. Except specifications don't easily compose. My role, then, was figuring out how to compose them anyway.
I solved it with something like this: first define each component's abstract state machine in a very particular style, where the state transitions are encoded as data. Then refine that into an actual component that uses action conjunction to tie implementation state updates to transitions. With that, it's possible to "compose" specs by linking their transitions with "synchronization points", which ultimately made the whole thing work.
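To give a flavor of the "transitions as data" style, here's a toy sketch in TLA+. All the names here are made up for illustration, and the real technique has a lot more machinery:

```tla
---- MODULE Worker ----
\* Abstract state machine where transitions are encoded as data:
\* every step also records which named transition it took.
VARIABLES state, transition

Init ==
  /\ state = "idle"
  /\ transition = "none"

StartJob ==
  /\ state = "idle"
  /\ state' = "working"
  /\ transition' = "StartJob"

FinishJob ==
  /\ state = "working"
  /\ state' = "idle"
  /\ transition' = "FinishJob"

Next == StartJob \/ FinishJob
====
```

A refinement can then conjoin its implementation-level state updates with the abstract action (that's the action conjunction), and because each step exposes its transition as data, a composed spec can synchronize two components by requiring particular transitions to fire together.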
Don't worry if that didn't make any sense; it took a while to explain to competent TLA+ practitioners too, and they got to see diagrams and actual code samples. Once we were on the same page it worked incredibly well, though. I should do a blog writeup of the technique; I think it'd be useful to a lot of people.
(Not every technique I came up with was this complicated; in fact, a lot were much easier. Like in this article on optimization, I share a trick to rapidly profile optimizations: adding TLCGet("level") < N as a state constraint. It was really easy to show people, but I don't think it's something a non-expert practitioner would come up with on their own.)
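For the curious, the trick looks roughly like this. TLCGet("level") is a built-in operator from the standard TLC module that returns the length of the behavior TLC is currently extending; the module name below is mine:

```tla
---- MODULE ProfileBound ----
EXTENDS TLC

\* Used as a state constraint, this stops TLC from exploring
\* any behavior past 10 states, so you can quickly compare how
\* different optimizations perform on a bounded search.
DepthBound == TLCGet("level") < 10
====
```

You'd register it in the model configuration with a CONSTRAINT declaration (CONSTRAINT DepthBound), and TLC will prune every state past that depth.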
Inventing these techniques felt qualitatively different from the "pattern matching on problems" I think of as expertise. It's like if I learned English purely as a technical language, and then realized that "gear" sounds the same as "fear", and from that self-invented poetry. It sounds a little pretentious saying it that way, but at the same time I'd guess it's something you've all experienced, because I'm sure you're all experts in something or other.
Besides that TLA+ project, I'm also building a large system in Alloy. We're still working on it, but even now it might be the largest Alloy production model ever. Certainly the largest one to use the new Alloy 6 features! We had to figure out how to actually use all of those features at scale.
Instead of coming up with a couple of big new techniques over a whole project, it's more like finding a new small technique every day. Like "how to show the exact step in a trace where a property is violated" or "how to find all atoms satisfying P at any point in the trace" or "how to check a spec against a bunch of different classes of initial configurations."1 It's the kind of workhorse stuff that has the potential to become best practices once I get around to writing it all up.
This aspect seems really tied in with the nature of "being an expert". I couldn't tell you what a C# expert knows about C#, but I can tell you how a C# expert can think about C#.
First thing, there will be a regular newsletter this Thursday, and after that I'll be in Hong Kong for a couple of weeks. Between that and Thanksgiving there won't be any more newsletters until Decemberish.
Once the newsletter resumes it will be back on a weekly update schedule. I've done the 6-a-month experiment for a year now and don't think it's sustainable long term, and it's also made it hard to write anything else. There might still be a surprise newsletter occasionally but I'm not committing to the same pace.