Notes
Open Interpreter’s Local Model Computer protocol feels very important. Apple and Microsoft have struggled to bootstrap ubiquitous scripting APIs despite obvious user benefits. And not just for nerds; the Kin and Newton (and, farther back, BeOS) showed the tremendous UX potential of a scripting-strong OS. They also demonstrated how difficult it is to sustain an ecosystem of developers willing to enable it.
But if users are finding LLM chatbots useful, and they can in turn use our computers for us, that might break a chicken-and-egg stalemate.
The strategy of empowering a nitwit despot just to get SCOTUS stacked in their favor really paid off for the antifederalist anarcho-capitalists today.
Just because I think the tech is revolutionary doesn’t mean I disagree that most of today’s products are bullshit: https://www.wheresyoured.at/expectations-versus-reality/
I hope something comes of this. Marking up LLM output to explicitly quote source material, so even if the model hallucinates, the reader can discern: https://mattyyeung.github.io/deterministic-quoting
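The core idea, as I understand it: the model emits placeholders instead of quoted text, and a renderer substitutes the verbatim passages from a source store, so quoted spans can never be hallucinated. A minimal sketch of that substitution step (the `{{quote:ID}}` placeholder syntax and function names are my own illustration, not from the linked post):

```python
import re

def render_with_deterministic_quotes(model_output, source_passages):
    """Replace quote placeholders emitted by the model with verbatim
    source text. source_passages maps passage IDs to exact strings
    from the original documents."""
    def substitute(match):
        passage_id = match.group(1)
        # The quoted text comes from the source store, not from the
        # model's token stream, so it cannot be hallucinated.
        return '"' + source_passages[passage_id] + '"'
    return re.sub(r"\{\{quote:(\w+)\}\}", substitute, model_output)

passages = {"p1": "The defendant was seen at 9pm."}
out = render_with_deterministic_quotes(
    "The witness stated {{quote:p1}} in testimony.", passages)
# → 'The witness stated "The defendant was seen at 9pm." in testimony.'
```

Even a hallucinating model can only mis-select which passage to quote, which the reader can check; it can't silently reword the quote itself.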
A parable of our decade in two pieces of nearly-identical hardware:
In 2012 we had the Descriptive Camera using Mechanical Turk, as a thought-provoking student project and a blog post.
In 2024 we get the Poetry Camera using AI, with a slick website and signups for buying one.
I see a lot of experts-of-the-day gnashing their teeth over @rabbit@threads.net being “exposed” for using puppeted Android apps in the cloud.
What did you all think they were using? Hand-wavy AI magic you’ve been pretending to understand? Some secret Android API that app developers have been quietly implementing just for them?
This is the obvious way to pull off what they’re doing with today’s ecosystem. It’s funky and pushing the envelope in ways that deserve scrutiny for security and privacy, but isn’t this what “innovation” is supposed to look like?