
Jhaks


Niner since 2006

  • TechFest - Lie Lu and Frank Seide - Music Steering Project

    heatlesssun wrote:
    Wouldn't think it would be anything different from the way lists are shared now.  Once the list is created, just share it.  Am I missing something?


    Uh... this isn't really about sharing playlists.  It's about attaching context and information to your music programmatically, and using that rich metadata to navigate and select music in a more user-centric way.  That way, when you listen to a song and want to hear similar songs, the device or computer has information about what that song is like and can pick songs that resemble it.
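    As a rough sketch of what metadata-driven selection could look like, here's a toy nearest-neighbour picker over per-song feature vectors (all song names, features, and values are invented for illustration):

```python
import math

# Hypothetical per-song feature vectors (tempo, energy, mood), 0..1 scale.
library = {
    "Song A": (0.80, 0.90, 0.70),
    "Song B": (0.30, 0.20, 0.40),
    "Song C": (0.75, 0.85, 0.60),
}

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def similar_songs(seed, k=2):
    """Rank the rest of the library by feature distance from the seed song."""
    seed_vec = library[seed]
    others = [(title, distance(seed_vec, vec))
              for title, vec in library.items() if title != seed]
    return [title for title, _ in sorted(others, key=lambda t: t[1])][:k]

print(similar_songs("Song A"))  # "Song C" ranks closest
```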
  • Chris Wilson: Inside IE8 Beta 1 For Developers

    Hakime wrote:
    Webslices a new feature? You are kidding, right? I mean, it would be nice if the folks at Channel 9 got out a little to see what is going on. Webslices is nothing more than a rip-off of Web Clip, a feature introduced in Safari for Mac OS X 10.5. At least the IE folks could be honest and clearly admit that they took the idea from Safari and ripped it off. That won't kill them....


    I think there are a lot of differences between the two features.  They both let you view partial contents of a page, but the usage and implementation are very different.

    Web clips in OS X are widgets, not browser items, and if I'm not mistaken the widget loads the entire web page but clips out the portion you want to see, which you have to set manually.  This doesn't account for the page changing layout, and you really shouldn't have to load the entire page just to see part of it.

    Webslices are provided by web pages (not by users, as with web clips) that want to offer quick views of parts of their site.  To create a WebSlice you annotate the section of your page you want to be a slice with some tags and properties.  When a user visits the site, a hover button appears near the WebSlice, and clicking it adds the slice.  A link is then added to the links bar, and whenever the content updates the link title is bolded, like an RSS feed.  Clicking the link loads only the content for the WebSlice, and it doesn't matter if the page layout changes, since the specific content is directly annotated as a slice.  For example, if I want updates on four eBay auctions I can set that up within a few seconds, and I'll know when the auctions change without having to check manually.  With clips it would take much longer to create the widgets, I'd have to check them to see if they updated, and if the page changed I'd have to re-adjust the clip region.  Web clips are not dynamic and are too manual to be useful for a wide variety of content, as many users have noted after using them for a while.

    Edit:
    Oops.  Nidonocu, looks like you said what I was about to say.  My rendition might be a little more verbose, though.  Anyway, WebSlices and Activities look like really useful features.
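    The annotation mentioned above is based on the hslice microformat: a container element with class "hslice" and an id, holding a child with class "entry-title" (and optionally "entry-content").  A minimal sketch of scanning a page for slice containers — the sample auction markup is invented:

```python
from html.parser import HTMLParser

class SliceFinder(HTMLParser):
    """Collect the ids of elements annotated as WebSlices (class="hslice")."""
    def __init__(self):
        super().__init__()
        self.slices = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if "hslice" in a.get("class", "").split():
            self.slices.append(a.get("id"))

page = """
<div class="hslice" id="auction-1234">
  <span class="entry-title">Vintage synth - current bid $120</span>
  <div class="entry-content">Auction ends in 2 hours.</div>
</div>
"""

finder = SliceFinder()
finder.feed(page)
print(finder.slices)  # ['auction-1234']
```

    Because the slice is identified by its annotation rather than by pixel coordinates, a layout change on the page doesn't break it.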
  • Microsoft Research TechFest - Using P2P to speed up multiplayer gaming (and other things)

    Chadk wrote:
    
    Btw. we already have MASSIVE multiplayer games. EVE Online has up to 33k players online at peak times. It's really amazing.

    But if objects are managed by the connected peers, wouldn't there be a potential problem where a peer could change the actual location of an object it manages, giving that player an unfair advantage?



    These massively multiplayer games aren't really massive in this sense, because the players don't all coexist at the same time in the same place.  It works because players are spread across the world, and because the game's root servers run on large server farms with much higher bandwidth than end systems.  With shooter games, one of the clients acts as the server, which makes that machine's processing power and bandwidth the bottleneck.  Shooters can also use P2P, as in the video, but then the bottleneck is each peer's upload capacity and the number of coexisting players.

    The problem with inconsistency came to my mind too.  It seems the focus-group approach does a good job of mitigating it by allocating more bandwidth to characters that are in focus.
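    To put rough numbers on the upload bottleneck: in a full-mesh P2P game, every peer sends each state update to all other peers, so per-peer upload grows linearly with the player count.  The update rate and packet size below are made-up figures, not from the video:

```python
def per_peer_upload_kbps(players, update_hz=20, packet_bytes=100):
    """Full-mesh P2P: each peer sends every state update to all other
    peers, so upload scales with (players - 1)."""
    return (players - 1) * update_hz * packet_bytes * 8 / 1000

for n in (8, 32, 128):
    print(f"{n:4d} players -> {per_peer_upload_kbps(n):6.0f} kbps upload per peer")
# 8 players need 112 kbps; 128 players already need over 2 Mbps up.
```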
  • Microsoft Research TechFest - Using P2P to speed up multiplayer gaming (and other things)

    prencher wrote:
    
    The download works perfectly; it's the streaming one that's barfed.

    Excellent interview, though what about NAT-related P2P issues? Most people don't have inbound ports opened, so how do you get around that?


    Huzzah!  That's where Vista comes in with its Teredo technology, an IPv4-to-IPv6 transition technology.  It allows automatic NAT traversal if you are hooked up to an IPv4 gateway.  If you have Vista you can try this out in the Windows Meeting Space application: it's a P2P app that uses Teredo and lets you transfer files and share desktops directly without needing to configure the NAT.

    I thought the focus groups were really smart.  People complain a lot that they're getting killed because their connection sucks, and if the bots are used too much people might say the game is somewhat unfair.  The focus group is a great way to address both problems.

    I just thought of something that could apply to distributed gaming too.  In my networks course we discussed multicasting, which is in development for applications like IPTV.  With traditional unicasting, an IPTV server would need to send the video to each and every client; with millions of clients, the server's upload bandwidth quickly becomes the bottleneck.  One solution is IP multicasting, where packet replication and branching happen at the network level.  Essentially a tree is constructed where the server is the root, internal nodes are routers, and the leaves are clients.  The root server sends the video to a few routers, those routers forward it to other routers, and eventually the branching tree reaches the clients.  More packet duplication occurs toward the periphery of the network, so the overall load is more evenly distributed.

    I can see this applying really well to multiplayer games, since upload capacity seems to be the bottleneck.  If a player sends state updates to everyone, there is a lot of unnecessary packet duplication at the source, when that duplication could be done closer to the clients at different routers.  Combined with the type of technology mentioned in the video, it seems you could have even larger sets of coexisting characters while also relying less on bots.  Too bad multicast deployment still seems so far away.  Well, at least people can look forward to the tech mentioned in the video.
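    A back-of-the-envelope comparison of where the duplication happens, assuming a balanced multicast tree with a made-up branching factor:

```python
import math

def unicast_copies_at_source(clients):
    """With unicast, the source alone transmits one copy per client."""
    return clients

def multicast_copies_per_node(branching=4):
    """In a balanced multicast tree, no single node (source or router)
    transmits more than `branching` copies of a packet."""
    return branching

def multicast_tree_depth(clients, branching=4):
    """Hops from the root to the leaves of a balanced tree."""
    return math.ceil(math.log(clients, branching))

clients = 1_000_000
print(unicast_copies_at_source(clients))  # 1000000 copies at the source
print(multicast_copies_per_node())        # 4 copies at any one node
print(multicast_tree_depth(clients))      # 10 levels of routers suffice
```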
  • Windows Vista "Time Warp": Understanding Vista's Backup and Restore Technologies

    I'm not completely sure, but I think there is a company that has created software that cleans out previous versions.
  • New Vista GUI Stuff For Devs

    DigitalDud wrote:
    The problem I see with this compositing technology is that it creates a large problem in order to solve a very small one.  You now need entirely new display drivers, throwing out the old ones that have had years to stabilize; application compatibility takes a hit (especially screen readers); and you need a complex and potentially unstable system to actually do the compositing.  All of these problems just to fix rare drawing artifacts and add a bit of eye candy.  It doesn't seem worth it.

    Hopefully Microsoft has bigger plans for DWM 2.0 and something more intuitive than the infamous Flip 3D.


    It doesn't seem like DWM should be much more complex than the old method; in fact, it should provide a more stable and robust system.  I wasn't sure what you meant by screen readers, but I'm guessing you mean screen-capture software.  Current screen-capture software (especially video capture) won't work fantastically because it wasn't designed for DWM (although it never worked that well in the first place).  However, capturing the desktop shouldn't be too hard in Vista, since it's all rendered in 3D.

    Using hardware-accelerated graphics opens up many possibilities that are otherwise very slow or impossible to implement.  Occlusion, transparency, and animations are all done quickly in hardware.  The point is that many computers now have decent graphics cards that are severely underutilized.  With DWM and WPF the entire machine is put to use, providing performance increases and, of course, a lot of graphical freedom for applications.

    Compatibility is always an issue when transitioning between technologies, but it shouldn't stop new innovations from being developed.  Plus, these transition problems are short-lived.
  • Lee Bandy on IPv6

    rcs wrote:
    
    IRenderable wrote: Do we really need IPv6? Sure, a lot of computers are out there, but most of them are behind routers, so there is only one IP address per router (if I'm understanding correctly, and it's very likely I'm not), and because of that you can cut the number of IP addresses needed by thousands.


    Yes, we really do! IPv4 has a maximum of something like 4.2 billion addresses (after you take out unusable addresses). Even if you just consider the publicly exposed addresses, a dangerously high number of them are already in use, and worse, there are a lot of obscure/clever/confusing ways to break up those subnets. So we are running out of addresses, and we are resigned to using wacky methods to make the addresses we do have usable in our environments.

    Meanwhile, IPv6 has a maximum of something like 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses.

    Seems like changing over is a small price to pay for "never" having to worry about IP address allotment again, I think!


    To add to this: non-home routers have many network interfaces, each with its own IP address, which contributes to the depletion of addresses, not to mention that in the future nearly all devices may have network capability.  Another reason for switching to IPv6 is that NATs inherently break the design of the internet (every interface was meant to have an address).  That's why people outside the NAT can't initiate contact with people inside without some kind of hack and a middleman server.  (You could see that reachability as a security risk, but firewalls can do a pretty good job while still retaining flexibility.)  IPv6 is also optimized for performance and adds some extensibility over IPv4.
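    The address-space figures quoted above are easy to check, since IPv4 addresses are 32 bits wide and IPv6 addresses are 128 bits wide:

```python
# Total address space of each protocol version.
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(ipv4_addresses)  # 4294967296, about 4.3 billion
print(ipv6_addresses)  # 340282366920938463463374607431768211456
```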
  • Lee Bandy on IPv6

    ZippyV wrote:
    When are we supposed to switch to ipv6?


    There is no specific time at which all devices will switch to an IPv6 network layer; a single cut-over is impossible considering the scale of the internet.

    Also, Vista natively uses IPv6, so if you use Vista you've already "switched".

    IPv4 will slowly be phased out by letting v6 and v4 work in conjunction (as Lee mentioned, through tunneling and other technologies).

    Even though it'll take a while to phase out IPv4, it's great that Microsoft is pushing IPv6 to speed things up.
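    As a concrete taste of the tunneling side, 6to4 is one real transition scheme: it derives an IPv6 /48 prefix directly from a host's public IPv4 address, and packets for that prefix get tunneled over the IPv4 internet.  The sample address below is from the IPv4 documentation range, not a real host:

```python
import ipaddress

def sixto4_prefix(public_ipv4: str) -> ipaddress.IPv6Network:
    """Derive the 6to4 prefix 2002:V4ADDR::/48 from a public IPv4 address.
    Layout: the 16-bit prefix 2002, then the 32-bit IPv4 address, leaving
    80 bits for the site's subnets and interface IDs."""
    v4 = int(ipaddress.IPv4Address(public_ipv4))
    return ipaddress.IPv6Network(((0x2002 << 112) | (v4 << 80), 48))

print(sixto4_prefix("192.0.2.1"))  # 2002:c000:201::/48
```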