A keyboard and mouse are really gesture recognition devices too, if you think about it; they are just mechanical in nature.  A keyboard offers a fixed set of combinations, and its mechanics don't change with context.  Any context lives in the software, something like a word processor or an IDE with intellisense and autocompletion, while the gestures you make are not contextual at all; they are still limited by the physical mechanics of the keyboard.

A touch screen with an on-screen keyboard, however, can change with context.  If a tablet were dedicated entirely to input, that flexibility might let an IDE offer richer input for intellisense and autocompletion, and make it easier to switch over to broader manipulation of blocks of text.  That said, I get that physical keyboards have a tactile advantage.
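
To make the idea concrete, here is a minimal sketch (all names hypothetical, not any real toolkit's API) of how a context-sensitive on-screen keyboard might pick its layout: completion candidates become keys while you're typing an identifier, and block-manipulation keys replace letters when a selection is active.

```python
# Hypothetical sketch of a context-driven soft-keyboard layout switcher.
from dataclasses import dataclass, field


@dataclass
class EditorContext:
    """What the host application reports about the caret and selection."""
    current_token: str = ""          # partial identifier under the caret
    has_selection: bool = False      # True when a block of text is selected
    completions: list[str] = field(default_factory=list)


def choose_layout(ctx: EditorContext) -> dict:
    """Pick the soft-keyboard layout for the current editor context."""
    if ctx.has_selection:
        # Broader block manipulation: editing commands replace the letter keys.
        return {"mode": "block",
                "keys": ["cut", "copy", "paste", "indent", "move-up", "move-down"]}
    if ctx.current_token and ctx.completions:
        # Intellisense-style: surface the top completion candidates as keys.
        return {"mode": "complete", "keys": ctx.completions[:5] + ["<more>"]}
    # Otherwise fall back to an ordinary QWERTY layout (top row shown here).
    return {"mode": "qwerty", "keys": list("qwertyuiop")}


if __name__ == "__main__":
    typing_ctx = EditorContext(current_token="get",
                               completions=["getItem", "getIndex", "getInstance"])
    print(choose_layout(typing_ctx))                          # completion layout
    print(choose_layout(EditorContext(has_selection=True)))   # block-manipulation layout
```

The point isn't this particular layout scheme, just that software can redraw the "keys" per context in a way no physical keyboard can.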