Ok, we had the infinite. What's the new VS logo supposed to mean?
Metro has been around for a long time; I just wonder why no one has started selling LCDs/LEDs with touch built in. That would be popular even with Win7 users. And from what I saw five years ago in ShenZhen, the parts are not very expensive (a few hundred CNY, under 80 USD, for a 15-inch display, though one that only supports a single touch point), so why has no one tried to do that?
That's because single-point touch doesn't work very well (and if it's resistive touch you are thinking of, then it just plain sucks) unless you apply it to applications that were specifically written around its limitations. Most interesting gestures require multiple fingers and, without those, you don't get an experience good enough to compensate for the lack of hovering and of a lazy right click.
May 28, 2012 at 11:09 AM
I'm no expert in this, but I think reference counting is also more predictable (latency-wise), making it a superior memory management method for real-time applications. GC performance depends on the GC algorithm, but I don't recall ever seeing a GC that was conclusively proven to be faster than reference counting. One thing GC does have that reference counting does not, though, is the ability to handle circular references.
The performance of reference counting is pretty much constant with respect to the amount of available memory. Conversely, garbage collectors improve steadily as you add more memory (with an interesting special case when the amount of memory is infinite). So, if GCs aren't ahead right now, they will be eventually.
May 22, 2012 at 2:43 PM
I believe that needs to be put in the right context. Garbage collection can be more performant than reference counting, so if you are designing a language that relies heavily on managed memory, a GC is the way to go. But there's the rub... C++ offers you options that allow you to express your code without using reference counting at all, possibly at the expense of memory safety.
Take for instance an object that is not meant to survive the scope it's allocated in. Assuming it's a reference type, in C# it gets invariably allocated on the managed heap and you incur collection costs. In C++, you could use a unique_ptr, or even just allocate it on the stack. Sure, that's not foolproof, but the gains in performance and memory pressure are significant.
That's where a smarter compiler could really help: in C++ it could be more aggressive in detecting dangerous situations; in C#, it could detect cases in which the lifetime of an object can be safely determined at compile time and get them out of the hair of the GC (and possibly onto the stack).
This is just the tip of the iceberg, but I already rambled enough.
May 19, 2012 at 6:58 PM
Awesome. There's less and less reason to have a CLR or VM if you have a compiler that can take in various languages and target various architectures just as well - and better. There's a session on auto-vectorization that was streamed live just today ... I only caught part of it and plan to check it out this weekend.
Someone correct me if I'm wrong here, but if you have deterministic finalization that works with move semantics, as you do with C++11, which lets you avoid falling back to raw pointers and managing memory outside of destructors etc., then performant C++ and C# become quite similar. The compiler can make the C# behave as it would as managed code (GC-wise ... ignoring CAS or other CLR services), but better, since there's true deterministic finalization. Since older C++ compilers didn't have move semantics for reference types, trying to compile C# code down to native would have resulted in copy semantics and horrible performance.
As I see it, the challenge in getting C# deterministic finalization is breaking its dependence on the GC. Extending the language would just beef up the unsafe part of it; what would be really cool would be for the compiler to spot, from usage, when a reference can be (safely and verifiably) converted into a unique_ptr or a shared_ptr, or even just allocated on the stack.
The down side of any major changes in the visual aspect of Windows is that some applications mix standard controls with self-drawn ones. Right now the differences are minimal, so they don't show, but I'm afraid it will be a while before we get visual consistency again.
@DeathByVisualStudio: Luckily, I still have a few inches left in my arms before I really need reading glasses, but I have been thinking about this for a while.
How much does that procedure affect your binocular vision? I don't mind 3D movies anyway, but I was under the impression that would interfere with my driving. How's your experience with that?