Notes

It’s no surprise that training an LLM on material that demonstrates sound reasoning results in better performance, but I’m fascinated by this identification of “cognitive” behaviors (verification, backtracking, subgoal setting, backward chaining). They sound obvious but I also don’t recognize them from any study of human reasoning: https://nitter.net/gandhikanishk/status/1896988028893323675

“And all I see is amateurism.”

“Vent less on Twitter and just look, in a constructive way, at how to do what’s best for the Netherlands.”

I love to see this multiparty democracy enforcing a meaningful balance of power. These days in particular.

Just as the dismantling of USAID was a practice run for DOGE dismantling the rest of the US government, I’m growing concerned the birthright citizenship attack is just a first step toward weakening the 14th Amendment.

I find it worrying how often GitHub Copilot offers to complete my Rust database access code with .execute(), a method from the very popular sqlx crate, which I am not using. It feels like I’m constantly being judged by a dumb popular kid for making a less popular choice (in my current case, cornucopia).

Our LLM friends could regress us to the mean even more subtly than media consolidation already does.

My biggest fear with AI is that we’ve reached a plateau. If we’re stuck with “smart enough to bullshit” but spend a decade struggling to actually automate meaningful solutions, I fear for human culture.