I like the SketchInsight demo, but it looks like you have to hook up a lot of data beforehand. The actual sketching seems to be little more than shorthand/a gesture for "Insert chart". Still cool, but you'd still need to do loads of prep work, and this is mainly for presentations. And what kind of presentation is so ad hoc that you find yourself thinking "I wish I could quickly show a chart of foo over bar, because I'm only just thinking of that now"?

The Kinect demo was also cool, although I'm wondering whether it wouldn't be easier and more intuitive to just touch the screen. I could see it in a scenario with a -massive- screen, but even then: wouldn't it be simpler to use a secondary device to control the panning and zooming, rather than these mid-air gestures?

Personally I saw the most potential in the demo where they used a mix of technologies: touch, a stylus, a phone app that changed control modes depending on how far away from the screen you were, that sort of thing. It all seemed really fluid.
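If I understood the distance-dependent part right, the phone basically picks an input mode from a rough estimate of how far you are from the display. Something like the sketch below is what I imagine is going on (the mode names, distances and hysteresis band are purely my guesses, not whatever the demo actually does):

    // Hypothetical sketch: choose an interaction mode from the user's distance
    // to the display. All names and numbers here are made up for illustration.

    type ControlMode = "touch" | "phoneTrackpad" | "airGesture";

    // Illustrative distance cut-offs in metres.
    const THRESHOLDS = { touch: 0.8, phoneTrackpad: 2.5 };
    // Small hysteresis band so the mode doesn't flicker near a boundary.
    const HYSTERESIS = 0.2;

    function pickMode(distance: number, current: ControlMode): ControlMode {
      // Bias each boundary toward the current mode so jitter in the distance
      // estimate doesn't cause rapid back-and-forth switching.
      const touchLimit = THRESHOLDS.touch + (current === "touch" ? HYSTERESIS : 0);
      const padLimit = THRESHOLDS.phoneTrackpad + (current === "phoneTrackpad" ? HYSTERESIS : 0);

      if (distance <= touchLimit) return "touch";   // within arm's reach of the screen
      if (distance <= padLimit) return "phoneTrackpad"; // phone acts as a remote trackpad
      return "airGesture";                          // far away: coarse gestures only
    }

    // As you step back from the wall display, the phone's role would change:
    console.log(pickMode(0.5, "airGesture"));    // "touch"
    console.log(pickMode(1.5, "touch"));         // "phoneTrackpad"
    console.log(pickMode(4.0, "phoneTrackpad")); // "airGesture"

The hysteresis is the part I'd guess matters most for making it feel fluid: without it, standing right at a threshold would make the phone flip modes constantly.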