Notes
If we expect to treat AI agents like people, why wouldn’t we expect to organize them similarly?
Back in 2011, when Microsoft demoed them, I didn’t see how communication avatars (sorry, “Personas”) would be useful for AR; strapping an iPad to my face was a distant dream. I still think the real value will be in facing emotional states, though. https://twitter.com/gerwitz/status/84311406148718593
If the feedback you give is always critical or always affirmative, you’re probably not being constructive.
Maybe we need not only conscience models but also doubt models.
I love this use of LLMs, translating the raw material of government to make it understandable: https://whatsard.co.uk/
At this point, it’s boring to observe that there exists a West Coast techno-utopian culture bubble where people think advanced AI is the biggest threat we should worry about, that the treatment for hatred is more free speech, that meritocracy will address inequity, etc.
It’s strange to have left it and to observe it from outside. I actually feel some nostalgia, and sometimes roll my eyes with a subtle pang of yearning for the comforting embrace of that myopia.