I was flipping through Marc’s copy of Tog on Software Design and was intrigued by the “SunPad” in Tognazzini’s description of Sun’s Starfire film (which itself is very reminiscent of Apple’s Knowledge Navigator concept):
She looks at the SunPad display, which now has the camera controls slaved to it. It’s as if the SunPad were a large camera viewfinder. She presses the button which turns on the pad, then zooms out using the zoom buttons as she begins to move the pad (and boom).
Today, we can readily produce a handheld screen with accelerometers and gyroscopes for motion sensing. As a bonus, we can even have a camera embedded on the backside. So when do I get to use one to “scan” a scene or object and package the imagery for playback later (or elsewhere)?
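The core of such a “scan and replay” capture is simple in principle: timestamp each camera frame alongside the device’s motion readings, so the sweep can be reconstructed later. Here is a minimal sketch of that pairing in Python; all names (`Sample`, `SceneCapture`, `record`, `playback`) are hypothetical, and a real implementation would pull readings from actual sensor APIs and interpolate device pose between samples.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Sample:
    t: float                          # capture timestamp, seconds
    accel: tuple                      # (x, y, z) accelerometer reading, m/s^2
    frame_id: int                     # reference to the camera frame taken at t

@dataclass
class SceneCapture:
    samples: list = field(default_factory=list)

    def record(self, accel, frame_id, t=None):
        # Pair a motion reading with the frame captured at the same instant.
        self.samples.append(Sample(time.time() if t is None else t,
                                   accel, frame_id))

    def playback(self):
        # Replay in capture order; a real player would interpolate
        # device pose between samples to re-create the sweep.
        return sorted(self.samples, key=lambda s: s.t)

# Simulated sweep: two frames captured 50 ms apart while the pad moves.
cap = SceneCapture()
cap.record((0.0, 0.0, 9.8), frame_id=0, t=0.00)
cap.record((0.5, 0.0, 9.8), frame_id=1, t=0.05)
for s in cap.playback():
    print(s.frame_id, s.accel)
```

The point of keeping motion and imagery in one timestamped stream is that “playback elsewhere” then needs nothing but this package: the frames plus the path the device traced while capturing them.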