Notes
A lot of enterprise software design has always been about serving users who work on behalf of others (customers, managers, etc.). It seems we should be able to transfer some of that know-how to designing for AI agents that use software on behalf of users.
(No, I don’t know how, exactly.)
I recently let Cline (I don’t remember which model) do a refactor that seemed quite harmless, especially in Rust.
But days later, while trying to track down a bug, I found that a large chunk of code had been replaced with
// ...rest of the function implementation...
🤦
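A cheap safeguard against this failure mode is to scan an agent’s edits for the placeholder comments it sometimes leaves in place of real code. The sketch below is hypothetical: the pattern list is my own guess at common elision phrases, not anything Cline documents.

```python
import re

# Hypothetical patterns for comments an LLM agent might leave
# in place of code it elided ("...rest of the function...").
PLACEHOLDER_PATTERNS = [
    re.compile(r"\.\.\.\s*rest of (the )?\w+", re.IGNORECASE),
    re.compile(r"(rest|remainder) of .*(unchanged|omitted|implementation)", re.IGNORECASE),
    re.compile(r"existing code (here|goes here|unchanged)", re.IGNORECASE),
]

def find_placeholders(text: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that look like elided code."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in PLACEHOLDER_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits
```

Run it over the diff (or the touched files) before accepting the change; a non-empty result means the agent probably dropped code on the floor.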
It’s no surprise that training an LLM on material that demonstrates sound reasoning results in better performance, but I’m fascinated by this identification of “cognitive” behaviors (verification, backtracking, subgoal setting, backward chaining). They sound obvious but I also don’t recognize them from any study of human reasoning: https://nitter.net/gandhikanishk/status/1896988028893323675
“And all I see is amateurism.”
“Vent less on Twitter and simply look, in a constructive way, at how you can do what’s best for Nederland.”
I love to see this multiparty democracy enforcing a meaningful balance of power. These days in particular.
In the way the dismantling of USAID was a practice run for DOGE dismantling the US government, I’m growing concerned the attack on birthright citizenship is just a first step toward weakening the 14th Amendment.
If we don’t purposefully raise AI, the internet will do what it will: https://www.nature.com/articles/s41599-025-04465-z