Mostly announcements and plans but also some fun floating point trivia
This is a really busy week for me, so light newsletter this time. Let's start with obligatory stuff and then get into fun newsletter stuff.
Announcements
So first off, I just finished the May TLA+ workshop. This is the first time I've felt really happy with the material, and that I won't have to do a week of revisions before running it again. Speaking of which, the last workshop (for now) is June 12. Due to a couple of surprise expenses I'm trying to get a bigger attendance for this one, so I'm dropping the price from $749 to $649. Use the code C0MPUT3RTHINGS for an additional 10% off.
Next, I'm speaking at GOTO Chicago next week. My talk, "Is Software Engineering Real Engineering?", compares and contrasts software with "real" engineering, based on interviews with 17 people who've done both. Turns out that I have a 10% discount coupon, hillel10, which takes the conference from $2395 to "merely" $2155.
(Obviously I want people to attend my talk, but it's cheaper to send four people to my workshop than pay full price for one conference ticket. Food for thought!)
Last, I finally have social media again: Bluesky! I'm @hillelwayne.com. Time will tell if this ends up being a good techie space, but if it is, I'll probably go back to 4x a month newsletter updates.
Plans
Once the talk and June workshop end, my schedule opens for a while. Obviously this is subject to change if I get any consulting gigs!^{1} Here's what's currently on my radar:
 Alloydocs: haven't updated these since 2020, which means all the Alloy 6 features (time, time, and time) aren't present yet. I promised Daniel Jackson I'd have it updated before the end of June, or else I'll donate $1000 to the GOP. And I promised myself that if I do get it done in time, I'm buying new microscope lenses.
 Logic for programmers: I got feedback on draft 0 which makes me want to redo the structure. Hoping to have a first draft by the end of the summer. Maybe. I know this one keeps getting delayed; writing a book without a strict deadline is really hard. Maybe I'll do a toxx clause for this too.
 learntla: I've been making small updates since I released it in 2022, but there's still more I want to cover. At the very least, I want to start adding examples, properly cover refinement, and write topics on optimization and debugging errors. Lower priority than the other two for now.
 Misc stuff that's unlikely to happen this summer: A new journalism project I'm really excited about but still need to lay the groundwork. Updating the Alloy workshop. Finding Excel consulting.
All of this is subject to change due to either external or internal factors.
Okay, that's all the planning to get done. Time for something actually fun:
-0
Floating point is a famous "leaky abstraction":
>> 0.1+0.2
0.30000000000000004
But here's a more obscure leak:
>> 1/-Infinity
-0 // not 0!
This is called a signed zero and is part of the IEEE 754 floating-point standard. From the 2008 copy:
2.1.25 floating-point number: A finite or infinite number that is representable in a floating-point format. [...] All floating-point numbers, including zeros and infinities, are signed.
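A quick way to see the leak in a JavaScript console (my own sketch, not from the standard): ordinary equality treats the two zeros as the same value, but `Object.is` and division can tell them apart.

```javascript
// -0 exists as a distinct bit pattern, but === hides it.
const negZero = -0;
console.log(negZero === 0);          // true — === treats them as equal
console.log(Object.is(negZero, 0));  // false — Object.is distinguishes them
console.log(1 / negZero);            // -Infinity — division exposes the sign
console.log(1 / 0);                  // Infinity
```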
The standard further mandates that some functions must behave differently depending on the sign of 0:
>> Math.atan2(0, -2)
3.141592653589793
>> Math.atan2(-0, -2)
-3.141592653589793
Mostly these are for specific situations not seen in day-to-day programming. Why have it? Whenever I think "wow FP is crazy" I think about this oral history by William Kahan, the chief architect of the IEEE 754 standard. For all the weirdness of 754, it had vastly fewer footguns than all the other numerical formats people used at the time. For example, it handled gradual underflow better:
Until the 1980s, almost all computers flushed underflows to zero and almost all programmers ignored them. In fact, Crays had no practical way to detect them and VAXs went likewise by default. Software users had no recourse but to choose different data if underflow caused their software to malfunction and they noticed it. Noticed malfunctions were uncommon, and most of these could be traced to a chasm poked into the number system by flushing numbers smaller than the underflow threshold to zero. This minuscule chasm between zero and the smallest positive normalized floating-point number yawned many orders of magnitude wider than the gaps between adjacent slightly larger floating-point numbers. An experienced numerical analyst could find ways around the chasm but naive programmers learned about it the hard way.
This kind of thing can happen when you're doing a computation where intermediate values are significantly larger or smaller than the end result. Consider taking the product of a 100-element array, where half the elements are 10 and half are 0.1. While the end result is going to be ≈1, depending on the ordering the intermediate product can go as high as 1e50 or as low as 1e-50. According to this page, the smallest 64-bit number in the VAX format is 2.9e-39, so 1e-50 would get rounded to 0, and the whole computation would return 0. IEEE 754 doubles can go as low as 5e-324, so you're less likely to get an underflow.
When underflows do happen, though, you can tell "from which direction" you underflowed by the sign of the 0. According to John Cook:
If a positive quantity underflows to zero, it becomes +0. And if a negative quantity underflows to zero, it becomes -0. You could think of +0 (respectively, -0) as the bit pattern for a positive (negative) number too small to represent.
Numerical analysis is crazy.
In researching this, I found another paper by Kahan: How Futile are Mindless Assessments of Roundoff in Floating-Point Computation. Unrelatedly, there is a lot of drama between him and the unum guy.

I realize there's a conflict of interest in telling people they should hire me while also telling them that if nobody hires me, everyone gets more free content. ↩
If you're reading this on the web, you can subscribe here. Updates are once a week. My main website is here.