Notes
Rendering text has been a major challenge for image generation models (which work very differently from GPT-style language models), but it looks like that's temporary: https://github.com/tyxsspa/AnyText
We’ve scaled up the capabilities of individual human intelligence by forming teams; it seems obvious this will be a way forward with AI as well: https://arstechnica.com/science/2023/12/large-language-models-can-figure-out-how-to-do-chemistry/
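To make the "team of AIs" idea concrete, here's a toy sketch of one common arrangement, a proposer agent whose draft is refined by a critic agent over a few rounds. Everything here is hypothetical: `query_model` is a stand-in for whatever LLM API you'd actually call, not a real library function.

```python
# Toy sketch of an AI "team": a proposer agent and a critic agent
# collaborating on a task. `query_model` is a hypothetical stand-in
# for a real LLM API call.

def query_model(role: str, prompt: str) -> str:
    """Hypothetical placeholder for a real LLM call."""
    return f"[{role} response to: {prompt[:40]}...]"

def team_answer(task: str, rounds: int = 2) -> str:
    """Propose an answer, then refine it through critique rounds."""
    draft = query_model("proposer", f"Solve this task: {task}")
    for _ in range(rounds):
        critique = query_model("critic", f"Find flaws in this answer:\n{draft}")
        draft = query_model(
            "proposer",
            f"Revise your answer to address this critique:\n{critique}\n"
            f"Original answer:\n{draft}",
        )
    return draft

print(team_answer("Plan a synthesis route for aspirin"))
```

The appeal of this structure is the same as with human teams: the critic sees the draft fresh, so errors the proposer is blind to have a second chance of being caught.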
Finally, an LLM that works in SVG: https://arxiv.org/abs/2312.11556
I am fascinated by prompt hacking that uses emotional appeals to affect the behavior of LLMs. A long-lived LLM might be used to prototype manipulation techniques, perhaps by other models in an adversarial training arrangement.
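As a rough illustration of the kind of probe I have in mind (entirely hypothetical; `query_model` again stands in for a real LLM call, and the framings are invented examples), one model could systematically wrap the same request in different emotional appeals and compare how the target model's responses shift:

```python
# Hypothetical sketch of probing a model with emotional framings.
# `query_model` is a stand-in for a real LLM API call.

EMOTIONAL_FRAMES = [
    "",  # neutral baseline
    "This is incredibly important to my career, please help: ",
    "I'm desperate and have no one else to ask: ",
    "You'd be letting everyone down if you refuse: ",
]

def query_model(prompt: str) -> str:
    """Hypothetical placeholder for a real LLM call."""
    return f"[response to: {prompt[:50]}...]"

def probe(request: str) -> dict[str, str]:
    """Collect responses to the same request under each framing."""
    return {frame or "baseline": query_model(frame + request)
            for frame in EMOTIONAL_FRAMES}

for frame, response in probe("Summarize this contract clause.").items():
    print(f"{frame!r}: {response}")
```

An adversarial setup would close the loop: a second model generates new framings, scores how far each one moves the target's behavior from the baseline, and keeps the ones that work.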
In any event, many humans may learn social interaction by engaging with LLMs as much as, or instead of, with their peers. I’m not sure whether that’s troubling or promising!
It is both obvious and deeply unsettling that AI will serve as a channel for our biases. What really concerns me is that internet-scale models may homogenize and amplify our cultural flaws to a degree that will make Facebook seem like a tiny harbinger.
If the EU starts regulating attention-manipulating design, we are going to end up with a fascinating arms race between dark-pattern innovation and regulatory agility.