1 minute ago, AndyC wrote

*snip*

You see, you're missing the subtleties. Let's say I'm holding a tablet in my left hand and using the stylus in my right. I swipe from the corner to bring up the switch list and want to swap to an app in the list near where my left hand is resting. It's much, much easier to use my thumb to click than to contort my arms to do it with the stylus. Except now you've put these close buttons there and we're back to accidental closures. And that's assuming you're using a tablet-style direct-to-display stylus, as opposed to a separate desktop one where there is a visual disconnect: you're now forcing the user to employ a degree of accuracy otherwise unnecessary to switch tasks (the Taskbar doesn't do that, for example).

The close buttons wouldn't be present, since you invoked the menu with touch instead of the stylus. The only person "forcing" the user to do anything here is you. It's about options. I've got a Wacom Bamboo and I'd never use it as a substitute for a mouse. And you claim I move the goalposts?

Similarly with a touch-enabled mouse, the whole point is to allow quick gestures that can bring up system or app UI, but your hand is still on the mouse and you might still want to use it to actually point and select. Except now you've made the close function you wanted more accessible disappear, so the user either has to give up the touch gestures or go back to the right-click menu that is apparently too difficult. More exotic devices like Kinect only increase the number of UI combinations, none of which are necessarily going to be used in isolation any more than you'd use the mouse in isolation from the keyboard.

Can you pat your head and rub your belly at the same time? Many can't. Same goes for using the touch features on a mouse while you're moving it. The "x" appears based on which input mode invoked the taskbar. End of story. Kinect again is a non-issue, as it doesn't cause the "x" to be presented; it's treated the same as touch.
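To put that "based on which input mode invoked it" idea in concrete terms, here's a minimal sketch in TypeScript. The only real API it leans on is the standard Pointer Events pointerType values ("mouse" | "pen" | "touch"); everything else (renderSwitcher, isSwitcherInvocation, getRunningApps, AppThumbnail) is hypothetical stand-in code for illustration, not anything Windows actually exposes:

```typescript
// Sketch: gate the close "x" on switcher thumbnails by the input modality
// that invoked the switcher. Assumption: mouse/pen invocations get the "x";
// touch (and Kinect, treated like touch) does not, so a resting thumb
// can't close an app by accident.

type InvokeMode = "precise" | "coarse";

interface AppThumbnail {
  appId: string;
  title: string;
}

// Map the raw pointer type to the precision class that decides
// whether close buttons are worth showing.
function classifyPointer(pointerType: string): InvokeMode {
  return pointerType === "mouse" || pointerType === "pen" ? "precise" : "coarse";
}

function renderSwitcher(apps: AppThumbnail[], mode: InvokeMode): void {
  const list = document.createElement("ul");
  for (const app of apps) {
    const item = document.createElement("li");
    item.textContent = app.title;
    if (mode === "precise") {
      // Only a mouse/stylus invocation gets the inline close "x".
      const close = document.createElement("button");
      close.textContent = "x";
      close.addEventListener("click", () => console.log(`close ${app.appId}`));
      item.appendChild(close);
    }
    list.appendChild(item);
  }
  document.body.replaceChildren(list);
}

// The edge-swipe / hot-corner handler records what invoked it.
document.addEventListener("pointerdown", (ev: PointerEvent) => {
  if (isSwitcherInvocation(ev)) {
    renderSwitcher(getRunningApps(), classifyPointer(ev.pointerType));
  }
});

// Hypothetical helpers standing in for the real shell logic.
declare function isSwitcherInvocation(ev: PointerEvent): boolean;
declare function getRunningApps(): AppThumbnail[];
```

The point of the sketch is just that the decision is made once, at invocation time, from the device that triggered the gesture; nothing about it requires the user to juggle two input methods at once.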

And it just doesn't do it well. Trying to fudge touch support onto a UI designed wholly around the idea of being mouse-driven is inevitably going to fail. That's kind of the whole reason why Windows 7 touch devices aren't ubiquitous already. It's why Steven Sinofsky said in the Building Windows 8 blog all those months ago: "Going back to even the first public demonstrations of Windows 7, we worked hard on touch, but our approach to implementing touch as just an adjunct to existing Windows desktop software didn't work very well. Adding touch on top of UI paradigms designed for mouse and keyboard held the experience back."

So you take a wholly general statement and use it to throw out absolutely everything they did on the desktop for touch and the improvements they've made to it in W8. Better tell that to the Office team, seeing as they're hell-bent on making Office more touch-friendly on the desktop.

I have absolutely no idea what point you're trying to make with that.

Just trying to pre-empt any Fitts's law excuses you might have about the difference in orientation between the W8 task bars...

Let's throw down a few more asinine use cases, shall we?

My touch-enabled mouse is low on battery and only has enough juice to power the touch surface. Should the "x" be displayed?

I'm in the bathroom on the toilet with my tablet in my left hand and my stylus in my right. Which hand do I wipe with and which do I swipe with? And where should I put the stylus? Maybe we should invoke Shitts Law here...

I just picked my nose, increasing the surface area of my touch point. According to Fitts's law, the thumbnails are now too small to hit accurately. Should they scale based on touch point size? Keep in mind I'm looking at pr0n, so my other hand is busy...