Leaving a note here so I can say “I was right” later: by the time I retire, it will seem quaint that we thought global economics could be understood as the US vs China, ignoring the self-determination of India and African societies.
I knew the Apple Vision Pro’s EyeSight looked familiar.
I am fascinated by prompt hacking that uses emotional appeals to affect the behavior of LLMs. A long-lived LLM might be used to prototype manipulation techniques, perhaps by other models in an adversarial training arrangement.
In any event, many humans may learn social interaction by talking with LLMs as much as, or instead of, with peers. I’m not sure if that’s troubling or promising!
It is both obvious and deeply unsettling that AI will serve as a channel for our biases. What really concerns me is that internet-scale models may homogenize and amplify our cultural flaws to a degree that will make Facebook seem like a tiny harbinger.
If the EU starts regulating attention-manipulating design, we are going to end up with a fascinating arms race between dark-pattern innovation and regulatory agility.
Why is it so easily accepted that pricing is limited by “what the market will bear” but scale is not?