How Will “Privacy” Limit Apple?
Apple’s been creeping towards privacy and encryption as a differentiator for some time, but last week’s address to EPIC was explicit. Tim Cook accused their neighbors of “lulling their customers into complacency about their personal information” and the digerati have taken note.
The result is some long-overdue public analysis of Google’s learn-everything-and-personalize strategy versus Apple’s. A false dilemma has emerged, driving concern that Apple will be so obsessed with privacy that product quality suffers. Dustin Curtis offers a thoughtful example of this narrative: he wisely reframes the debate from privacy to security, and points out that what Google explicitly sells is user attention rather than data. Representative of the zeitgeist, he worries about “vast improvements in user experience” that Google’s aggregation of user data does or will enable.
So how might Apple’s stance prevent it from providing users the best experience?
Google can likely target more precisely by sniffing communications, but at risk of crossing the creepy line. As seen with Netflix and Amazon, users tend to prefer a clear, explicit relationship between observation and recommendation.
Usage data for product development is increasingly important. Third-party developers and Apple themselves benefit when users opt-in to diagnostic sharing, but a privacy-steward filter on what they’re willing to collect could slow discovery of product flaws and opportunities. So far, Apple’s designers have been savvy enough, but I often find myself hoping they’re paying close attention when I give up on Apple Maps and switch to Google’s.
Contextual interactions as seen with Google Now are certainly easier to design if you assume a social contract with users to use their every move in any algorithm, but Proactive may be a nuanced approach that makes the sharing contract explicit. Google’s all-knowing-cloud approach is respected, but software agents don’t have to be monolithic. Even if Apple doesn’t know where I have been today, my instance of Siri might.
Population-scale learning simply requires mass data collection. As Nick Heer points out, Apple’s stance will certainly make it more difficult for them to develop a cat-recognition algorithm. But ResearchKit shows that they understand the power of such data. For now, at least, it seems they would rather mediate a trust relationship with third parties for sensitive services like health and money, even if that means passing up the chance to keep the resulting insights to themselves. It’s conceivable they could do the same for communications. (It could be argued they already have, by introducing a popular phone and allowing third parties to host messaging platforms on it.)
Each of these is certainly easier to deliver with Google’s monolithic approach, but I haven’t yet seen a compelling argument that Apple is completely passing on any UX opportunities by declining to aggregate my private data.