Notes

I am fascinated by prompt hacking that uses emotional appeals to affect the behavior of LLMs. A long-lived LLM might be used to prototype manipulation techniques, perhaps by other models in an adversarial training arrangement.

In any event, many humans may learn social interactions by interacting with LLMs as much as with, or instead of, their peers. I’m not sure whether that’s troubling or promising!

It is both obvious and deeply unsettling that AI will serve as a channel for our biases. What really concerns me is that internet-scale models may homogenize and amplify our cultural flaws to a degree that makes Facebook look like a tiny harbinger.

If the EU starts regulating attention-manipulating design, we are going to end up with a fascinating arms race between dark-pattern innovation and regulatory agility.

Why is it so easily accepted that pricing is limited by “what the market will bear” but scale is not?

Is anyone using AI to make graphic games out of the old Infocom text adventures?

This exhaustive review of paper-based microfluidics technology has me thinking about how interesting the biosensing industry is becoming (and where investment opportunities lie). It also made me curious about laser-cutting microfluidic logic gates, which led me to rediscover liquid computing, bubble logic, and the Center for Bits and Atoms.

I have a (poorly informed) sense that people studying this space end up in exciting research on metamaterials and other exotic quantum physics. I wish there were more hacking around with things an average person can understand; that kind of work has educational merit and feels more likely to change the world at scale.