This is a really busy week for me, so light newsletter this time. Let's start with obligatory stuff and then get into fun newsletter stuff.
So first off, I just finished the May TLA+ workshop. This is the first time I felt really happy with the material and that I won't have to do a week of revisions before running it again. Speaking of which, the last workshop (for now) is June 12. Due to a couple of surprise expenses I'm trying to get a bigger attendance for this one, so I'm dropping the price from $749 to $649. Use the code
C0MPUT3RTHINGS for an additional 10% off.
Next, I'm speaking at GOTO Chicago next week. My talk, Is Software Engineering Real Engineering, compares and contrasts software engineering with "real" engineering, based on interviews with 17 people who've done both. Turns out that I have a 10% discount coupon,
hillel10, which takes the conference from $2395 to "merely" $2155.
(Obviously I want people to attend my talk, but it's cheaper to send four people to my workshop than pay full price for one conference ticket. Food for thought!)
Last, I finally have social media again: Bluesky! I'm
@hillelwayne.com. Time will tell if this ends up being a good techie space, but if it is, I'll probably go back to 4x a month newsletter updates.
All of this is subject to change due to either external or internal factors.
Okay, that's all the planning to get done. Time for something actually fun:
Floating point is a famous "leaky abstraction":
```
>> 0.1+0.2
0.30000000000000004
```
But here's a more obscure leak:
```
>> -1/Infinity
-0 // not 0!
```
This is called a signed zero and is part of the IEEE-754 floating point standard. From the 2008 revision:
> 2.1.25 floating-point number: A finite or infinite number that is representable in a floating-point format. [...] All floating-point numbers, including zeros and infinities, are signed.
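One wrinkle: in JavaScript, `-0 === 0` evaluates to true, so actually detecting a signed zero takes a small trick. A quick sketch:

```javascript
// -0 compares equal to 0 under both == and ===:
console.log(-0 === 0);         // true

// But Object.is distinguishes the two zeros:
console.log(Object.is(-0, 0)); // false

// Dividing by the zero also exposes its sign:
console.log(1 / -0);           // -Infinity
console.log(1 / 0);            // Infinity
```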
The standard further mandates that some functions must behave differently depending on the sign of 0:
```
>> Math.atan2(0, -2)
3.141592653589793
>> Math.atan2(-0, -2)
-3.141592653589793
```
Mostly these matter in specific situations not seen in day-to-day programming. So why have it at all? Whenever I think "wow FP is crazy" I think about this oral history by William Kahan, the chief architect of the IEEE 754 standard. For all the weirdness of 754, it had vastly fewer footguns than all the other numerical formats people used at the time. For example, it handled gradual underflow better:
> Until the 1980s, almost all computers flushed underflows to zero and almost all programmers ignored them. In fact, Crays had no practical way to detect them and VAXs went likewise by default. Software users had no recourse but to choose different data if underflow caused their software to malfunction and they noticed it. Noticed malfunctions were uncommon, and most of these could be traced to a chasm poked into the number system by flushing numbers smaller than the underflow threshold to zero. This minuscule chasm between zero and the smallest positive normalized floating-point number yawned many orders of magnitude wider than the gaps between adjacent slightly larger floating-point numbers. An experienced numerical analyst could find ways around the chasm but naive programmers learned about it the hard way.
This kind of thing can happen when you're doing a computation where intermediate values are significantly larger or smaller than the end result. Consider taking the product of a 100-element array, where half the elements are 10 and half are 0.1. While the end result is going to be ≈1, depending on the ordering the intermediate product can go as high as 1e50 or as low as 1e-50. According to this page, the smallest 64-bit number in the VAX format is 2.9e-39, so 1e-50 would get rounded to 0, and the whole computation would return 0. IEEE 754 doubles can go as low as 5e-324, so you're less likely to get an underflow.
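Here's a sketch of that scenario in JavaScript, using the worst-case ordering (all the small elements first) so the running product bottoms out around 1e-50:

```javascript
// Hypothetical 100-element array from the example above:
// fifty 0.1s followed by fifty 10s.
const xs = [...Array(50).fill(0.1), ...Array(50).fill(10)];

let product = 1;
let smallest = Infinity;
for (const x of xs) {
  product *= x;
  smallest = Math.min(smallest, Math.abs(product));
}

// ~1e-50 is far below the VAX minimum of ~2.9e-39 (it would have
// flushed to 0 there), but well above the smallest IEEE 754
// subnormal double, ~5e-324, so the final product survives.
console.log(smallest); // ≈ 1e-50
console.log(product);  // ≈ 1
```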
When underflows do happen, though, you can tell "from which direction" you underflowed by the sign of the 0. According to John Cook:
> If a positive quantity underflows to zero, it becomes +0. And if a negative quantity underflows to zero, it becomes -0. You could think of +0 (respectively, -0) as the bit pattern for a positive (negative) number too small to represent.
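You can watch this happen in JavaScript by squaring 1e-200, which underflows past the smallest subnormal double:

```javascript
// 1e-200 * 1e-200 would be 1e-400, below the smallest subnormal
// double (~5e-324), so it underflows to zero. But the sign survives:
const pos = 1e-200 * 1e-200;
const neg = -1e-200 * 1e-200;

console.log(Object.is(pos, 0));  // true: underflowed "from above"
console.log(Object.is(neg, -0)); // true: underflowed "from below"
console.log(1 / neg);            // -Infinity
```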
Numerical analysis is crazy.
In researching this, I found another paper by Kahan: How Futile are Mindless Assessments of Roundoff in Floating-Point Computation. Unrelatedly, there is a lot of drama between him and the unum guy.
I realize there's a conflict of interest in telling people they should hire me while also telling them that if nobody hires me, everyone gets more free content. ↩