I usually refrain from copying viral blogosphere memes, but it took me a while to hunt down anything meaningful about this one, so I’m noting it here. The supposedly self-evident truth is that word recognition is based not on shape or sequence, as usually assumed, but merely on letter content and the first and last letters. Several tools are available online to demonstrate this.
If you experiment with those tools, though, you’ll find that most of the jumbled results are far more difficult to read than the examples making the rounds. Clearly, these were generated by hand and optimized to maintain readability.
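The naive version of the trick is easy to reproduce yourself: keep the first and last letters of each word in place and shuffle everything in between. Here is a minimal sketch in Python (the function names are my own, and punctuation handling is ignored for simplicity):

```python
import random

def scramble_word(word, rng):
    # Keep the first and last letters fixed; shuffle only the interior.
    # Words of three letters or fewer have no interior to shuffle.
    if len(word) <= 3:
        return word
    interior = list(word[1:-1])
    rng.shuffle(interior)
    return word[0] + "".join(interior) + word[-1]

def scramble_text(text, seed=0):
    # Seeded so the same input always scrambles the same way.
    rng = random.Random(seed)
    return " ".join(scramble_word(w, rng) for w in text.split())

print(scramble_text("According to a researcher at Cambridge University"))
```

Run it on a sentence or two and you will likely see the problem: purely random interiors are noticeably harder to read than the meme’s curated examples.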
But what additional criteria make these examples almost as readable as plain text? It seems to me that the unit being mixed up is slightly larger than a letter, though maybe not as large as a phoneme. Consider the “gh” in “rghit”, the “sh” and “ing” in “Elingsh”, and the fact that “th” is only sometimes broken up, and then only by a single vowel.
Maybe the vowel-consonant pattern is preserved? Maybe shape is mimicked for larger words (“Aoccdrnig”, “uinervtisy”)? Maybe letters are never moved too far? Even if it’s exaggerated for melodrama, at least this one has us thinking.
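One of these hypotheses is easy to test mechanically: shuffle the interior of each word, but move familiar letter clusters as single units. This is only a sketch of the idea; the cluster list below is a small illustrative set I chose, not a linguistically complete one:

```python
import random

# Illustrative clusters drawn from the meme's examples ("gh", "sh", "ing");
# a real test would use a proper list of English digraphs.
CLUSTERS = ("ing", "gh", "sh", "th", "ch")

def chunk_interior(s):
    # Greedily group known clusters so each moves as one unit.
    chunks, i = [], 0
    while i < len(s):
        for c in CLUSTERS:
            if s.startswith(c, i):
                chunks.append(c)
                i += len(c)
                break
        else:
            chunks.append(s[i])
            i += 1
    return chunks

def scramble_keeping_clusters(word, rng):
    # Same first/last-letter rule as before, but shuffle cluster chunks.
    if len(word) <= 3:
        return word
    chunks = chunk_interior(word[1:-1])
    rng.shuffle(chunks)
    return word[0] + "".join(chunks) + word[-1]

rng = random.Random(1)
print(scramble_keeping_clusters("English", rng))
```

Comparing the output of this variant against the purely random shuffle would be one crude way to check whether keeping sub-letter-group units intact accounts for the readability of the hand-tuned examples.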