Notes

These short posts include "asides" from my original blogs and a sample of "tweets" from Twitter; more recently they are often syndicated to my Fediverse profile.

Today in “things my coding model said”:

This is a great example of premature optimization - the debouncing added complexity to solve a performance problem that doesn’t actually exist at the scale we’re operating.

(It wrote the debouncing in the first place; the critique only arrived after I asked it to simplify.)
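
Since the debouncing never survived the simplification, here's a minimal sketch of the shape it took, assuming a simple time-gated wrapper. The names, the 200 ms gap, and the `refresh` example are all invented for illustration:

```rust
use std::time::{Duration, Instant};

/// Hypothetical sketch of the debounce wrapper the model added:
/// it suppresses calls that arrive within `min_gap` of the last run.
struct Debounced {
    last_run: Option<Instant>,
    min_gap: Duration,
}

impl Debounced {
    fn new(min_gap: Duration) -> Self {
        Self { last_run: None, min_gap }
    }

    /// Runs `f` only if enough time has passed since the previous run.
    fn call(&mut self, f: impl FnOnce()) {
        let now = Instant::now();
        if self
            .last_run
            .map_or(true, |last| now.duration_since(last) >= self.min_gap)
        {
            self.last_run = Some(now);
            f();
        }
    }
}

fn main() {
    let mut refresh = Debounced::new(Duration::from_millis(200));
    refresh.call(|| println!("refreshing"));
    // At the handful of events we actually see, a plain call
    // would have done the same job with none of this machinery.
}
```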

As much as I enjoy watching models explain their reasoning as they work through tasks I've given them, I keep thinking about the recent paper _Chain-of-Thought Is Not Explainability_ and how often we should remind people that rationalization is not always insight:

verbalised chains are frequently unfaithful, diverging from the true hidden computations that drive a model’s predictions, and giving an incorrect picture of how models arrive at conclusions

My AI coding agent has been well coached. Describing a few discarded options for a caching architecture:

Redis External Cache: Adds infrastructure complexity (against smol tech philosophy)
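
For the curious, a minimal sketch of the kind of in-process cache that fits the same philosophy, assuming a TTL'd HashMap is all that's needed; `LocalCache` and everything in it is invented for illustration, not the agent's actual design:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Hypothetical sketch of the in-process option: a HashMap with
// per-entry expiry, nothing extra to deploy or operate.
struct LocalCache {
    entries: HashMap<String, (String, Instant)>,
    ttl: Duration,
}

impl LocalCache {
    fn new(ttl: Duration) -> Self {
        Self { entries: HashMap::new(), ttl }
    }

    fn get(&self, key: &str) -> Option<&str> {
        match self.entries.get(key) {
            Some((value, stored)) if stored.elapsed() < self.ttl => Some(value.as_str()),
            _ => None,
        }
    }

    fn put(&mut self, key: String, value: String) {
        self.entries.insert(key, (value, Instant::now()));
    }
}

fn main() {
    let mut cache = LocalCache::new(Duration::from_secs(60));
    cache.put("greeting".into(), "hello".into());
    assert_eq!(cache.get("greeting"), Some("hello"));
}
```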

Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. – _Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task_

A lot of enterprise software design has always been about serving users who work on behalf of others (customers, managers, etc.). It seems we should be able to transfer some of that know-how to designing for AI agents that use software on behalf of users.

(No, I don’t know how, exactly.)

I recently let Cline (I don’t remember which model) do a refactor that seemed quite harmless. Especially in Rust.

But days later, while chasing a bug, I found that a large chunk of code had been replaced with

```rust
// ...rest of the function implementation...
```

🤦
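
One hypothetical reconstruction of why nothing complained: if the gutted code sits in a function that returns `()`, a body reduced to a comment still type-checks, and the missing work only shows up as behavior at runtime. The function name and signature below are invented; only the comment comes from the actual diff:

```rust
// Invented signature; the compiler has no complaint about an empty body
// when the return type is `()`.
fn sync_notes(_entries: &[String]) {
    // ...rest of the function implementation...
}

fn main() {
    sync_notes(&[]);
    println!("compiles, runs, and quietly does nothing");
}
```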