It's sort of astonishing to think about how much value I've gotten from Twitter over the last few years. At its best, my Twitter feels like spending a leisurely afternoon at a cafe where computing enthusiasts are cycling through and casually discussing the issues of the day. I love how the character limit encourages an unpolished tone and open-ended conversations.
Still, I do find myself wondering whether I'm training my brain to only write in 280-character snippets. And it's a bit strange how anything on Twitter can suddenly spill over the walls of the cafe and become intensely public.
So I'm thinking perhaps this email newsletter will be a way to discuss my research in longer sentences, and to have cozier conversations. Definitely reply and let me know if you have ideas on topics/questions you'd like to see me talk about here.
Amjad Masad, the CEO of Replit [1], recently posed a question on Twitter that I've spent a lot of time thinking about:
What's your theory for why programmers reach for printf-debugging more frequently than other modes (like step-debugging)? — Amjad Masad (@amasad), April 9, 2021
I think it's tempting to answer like this: "Clearly Real Debuggers offer a superior experience to print debugging in so many ways. But print debugging is just easier to get started with, and it reliably works anywhere, so that's why we use print debugging so much." This answer is especially tempting if you (like me) believe that powerful developer tools can make us much more productive and better at programming. Surely the Real Debuggers must be better than primitive print debugging, right?
I don't totally disagree, but I do want to point out that print debugging has one critical feature that most step-based debuggers don't have: you can see program state from multiple time steps all at once. Think about it: with print debugging, you run your program, and your terminal fills with output; you can rapidly skim up and down with your eyes to investigate what your program did as it ran. If the program ran 100 iterations of a loop, you're seeing those 100 iterations exploded in space, all of them simultaneously visible.
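To make that concrete, here's a toy example of my own (not from the thread): a single print statement inside a binary search loop produces a complete trace of every iteration, all visible at once in the terminal, where a stepping debugger would show you only one frozen frame at a time.

```python
def binary_search(data, target):
    """Return the index of target in sorted data, or -1 if absent."""
    lo, hi = 0, len(data) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        # One print statement per iteration: after the run, the whole
        # history of the loop is "exploded in space" in the terminal,
        # and you can skim it up and down with your eyes.
        print(f"lo={lo} hi={hi} mid={mid} data[mid]={data[mid]}")
        if data[mid] == target:
            return mid
        elif data[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

binary_search([1, 3, 5, 7, 9, 11], 9)
```

Running this prints one line per iteration, so a bug like an off-by-one in the bounds jumps out from the shape of the trace itself, with no stepping required.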
Compare this to a step-based debugger: to move forward in time, you have to explicitly press a button, and stepping backwards is (typically) impossible [2]. Yes, you have a lot of power and flexibility at any single frozen point in time, but it's hard to get much of a view across the sweep of time.
Personally, I think this difference has a lot to do with why print debugging feels so good. Often when I'm trying to understand what's going on in my code, my first question isn't "what exactly was the state of my program at this exact point?" It's more like "what happened, in general?" I'm trying to answer basic questions like which code ran in what order, or how a piece of state changed over multiple iterations. When I manage to design the right print statement, it feels really good to answer these questions with a printed-out trace: no cumbersome stepping, just scrolling up and down a log to figure things out.
I should emphatically mention: I'm not saying that print debugging is the best end state for debugging tools, far from it. I'm just saying that we should reflect deeply on why print debugging is so popular, beyond mere convenience, and incorporate those lessons into new tools.
A couple references on this topic:
If you haven't read them yet, I highly recommend Bret Victor's classic essays Learnable Programming and Up and Down the Ladder of Abstraction, which explore the power of visualizations that abstract over time.
Finally, I've done some explorations into showing the state of a user interface over time using graphs and other data visualizations. How much better could debugging be if we had custom visualizations of our program state over time, instead of just a wall of text?
I think there's a marketing lesson to be learned from all of this too. There's a lot of fancy debugging tech out there, but it's often hard to describe to people. Can we make fancy debuggers that start out feeling like "just a familiar print debugger", but then offer gradually enhanced capabilities?
I've mocked up a "print debugger" where you can simply uncheck print statements to hide them. That's the gateway feature to more powerful things, like retroactively changing your print statements after the program runs (using reverse debugging tech), or outputting visuals. pic.twitter.com/SbmOw42cwk — Geoffrey Litt (@geoffreylitt), April 24, 2020
I wrote a blog post recently about the idea of Bring Your Own Client: having the freedom to choose your favorite application to interact with some data. It got more attention than I expected; it feels like there's a lot of hunger out there for organizing technology in a way that gives end users more flexibility and freedom, and less lock-in to big corporations.
The hard question is, though: how do we get there from here? Even if we can invent an architecture where people have more control of their own data, there's a cold start problem: developers need to build apps for that architecture, and users need to join the platform too.
One approach I've been thinking about is going to where the data already lives. Can we start out with existing applications that people already use, and find ways to "liberate" the data inside of them so people have more access and control? This could include getting to choose your preferred application to work with some data, or even writing your own little program to interact with your data.
The hope is that, at first, this would look like something useful but mundane: maybe a "synchronization engine between apps" or a "more convenient wrapper around cloud APIs". But over time, it could grow into a fundamentally different architecture for how we store and work with our data.
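Here's a toy sketch of that architecture, entirely my own invention: the adapter classes and their methods are hypothetical stand-ins, not real Trello or Figma APIs. The idea is that two app-shaped adapters read and write one shared, canonical store, so neither app "owns" the data.

```python
class SharedStore:
    """Canonical, app-independent representation of the roadmap data."""
    def __init__(self):
        self.items = {}  # id -> {"title": ..., "status": ...}


class KanbanAdapter:
    """Stands in for a card-based app like Trello (hypothetical API)."""
    def __init__(self, store):
        self.store = store

    def move_card(self, item_id, column):
        self.store.items[item_id]["status"] = column

    def cards_in(self, column):
        return [item["title"] for item in self.store.items.values()
                if item["status"] == column]


class CanvasAdapter:
    """Stands in for a canvas app like Figma (hypothetical API)."""
    def __init__(self, store):
        self.store = store

    def add_sticky(self, item_id, title, status="todo"):
        self.store.items[item_id] = {"title": title, "status": status}

    def label_for(self, item_id):
        item = self.store.items[item_id]
        return f'{item["title"]} [{item["status"]}]'


store = SharedStore()
kanban, canvas = KanbanAdapter(store), CanvasAdapter(store)

canvas.add_sticky("a1", "Ship BYOC blog post")  # created via the "canvas" app
kanban.move_card("a1", "done")                  # moved via the "kanban" app
print(canvas.label_for("a1"))                   # the canvas app sees the move
```

Because both adapters point at the same store, an edit made through either app is immediately visible through the other. The real engineering challenge, of course, is doing this against actual cloud APIs, with conflicts, latency, and partial failures in the mix.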
That's all quite abstract, so here's a little concrete demo video: what if we could use Trello and Figma to simultaneously edit our product roadmap?
It feels important to emphasize that the point here isn't really these two specific apps or this specific use case. It's more about exploring how, when two cloud apps are synced live to a shared data representation, it starts to feel like the data is more independent of the individual apps. Perhaps by going further down this path, we can incrementally move the locus of control from cloud silos to some other place.
That's it for this inaugural newsletter. If you have any reactions feel free to send a reply, I'd love to hear from you.
1. If you're not following Replit, they're a fascinating company to watch. They seem to be pulling off the classic disruption playbook: one moment everyone thinks they're making a toy for kids, the next moment they're taking over the world of developer tools. I wouldn't be surprised if they're the next big development and hosting platform. ↩
2. Even a reversible debugger with a step back button doesn't fully address the issue, if you're still only seeing a single point in time. ↩