But hey, maybe Johnny Chung's mysterious new role is to do something about that.
A few months ago the Word Lens app for iOS came out; the concept is good, but the execution needs work (it uses an offline dictionary for translation, which tends to suck).
What if he's gone over to Google to work on a Google version of this? He's got plenty of experience in computer vision. Or what about a unified Google augmented-reality vision application that does everything?