Reacting to: Interactive explanations

Look, I get it. Agents can spit out code fast, but half the time you end up staring at a blob you don’t truly understand. Simon’s idea of paying down “cognitive debt” with interactive explanations hits hard. Static docs and linear walkthroughs are fine, but nothing punches clarity into your brain like watching an algorithm move.

This is the part I like: he didn’t just accept “Archimedean spiral” as a magic phrase. He forced the model to show its work with an animated explainer. That’s the right instinct. I’ve seen too many AI-built features that are black boxes with a green checkmark. It works today, then you’re stuck tomorrow when you need to tweak the behavior and you can’t explain why the box does what it does. Not ideal.
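To make that concrete: the reason "Archimedean spiral" shouldn't stay a magic phrase is that the math underneath is one line, r = a + b·θ, and an explainer can show exactly what that buys you. Here's a minimal sketch (my own illustration, not code from Simon's post) that samples spiral points and exposes the property an animation would make visible: the gap between successive loops is constant.

```python
import math

def archimedean_spiral(a: float, b: float, turns: float, steps: int):
    """Sample (x, y) points along an Archimedean spiral r = a + b*theta.

    Theta advances in equal increments, so the radius grows linearly --
    which is why consecutive loops sit a constant distance apart.
    That constant spacing is the thing a static phrase hides and an
    animated explainer makes obvious.
    """
    points = []
    for i in range(steps + 1):
        theta = (2 * math.pi * turns) * i / steps
        r = a + b * theta
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# Gap between loops is 2*pi*b, independent of how far out you are.
pts = archimedean_spiral(a=0.0, b=0.5, turns=3, steps=300)
```

Feed those points into any plotting or animation layer frame by frame and you get the "show me the moving parts" mode for free.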

It also maps cleanly to what we’re doing with FrameFlow. If we’re asking users to build scroll-driven stories, we should be dogfooding these explorable explanations ourselves. Visualize the pipeline, animate the flow of assets, make the weird parts legible. Otherwise we’re just shipping more magic and calling it “intuitive.” It isn’t.

Original post: https://simonwillison.net/guides/agentic-engineering-patterns/interactive-explanations/

P.S. If every AI tool shipped with a “show me the moving parts” mode, half the fear would evaporate.
