

androidi
  • How long does it take to produce a great song?

    I have often heard the story that the best songs happen in just a few minutes. Well, in my experience the best ideas can come from a happy accident in a few seconds. But I came across interesting information from one of the best producers on the planet: he said it took 3 months to make one of his most famous Amiga tracker songs. Since then he went on to produce some pop music that still gets played on the radio every year, despite being two decades old now.

    I just wanted to put this information out there in case game producers and publishers need to know how much time to allocate for music production if they hire a musician. As much as I hate to say this, it often doesn't make sense to hire a musician. It's much better to have a person with the ability to see what the game looks like and what it should feel like, and have that person go through suitable tracks for the game, then find the people who made the suitable tracks and commission a few more. (I'd say if you want one great, unforgettable track you need to listen to 1000 tracks; in my experience that takes about 5 seconds per track for fast-paced music, up to a minute per track for ambient/classical. I'd reserve at least a couple of weeks to sort 20 good tracks for a game out of unknown material.)

    If each good track is assumed to take a couple of months to produce, you can surely figure out a fair price for a couple of months of work if it's exclusive to the production. Personally I am not sure any musician can hope to make that kind of money until they are famous, and some maybe not even then. That's why I postponed my career in music production until such time that I can afford it without needing to actually sell or market anything, because I like buying and picking more than selling and marketing. Well, unless the product is so good that it sells itself; I do occasional free promotion for other people's products that I see being worth it.

    One thing I'm looking out for is GSB2. Back when I promoted GSB1 here I didn't even know it had mod support, which isn't surprising considering the game's developer admitted he didn't know about the mods either, despite having built the mod support. I wish he had made more noise about the mods earlier, since the biggest complaint about the game was too little value for the money.
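    A quick back-of-the-envelope sketch of the auditioning time described above (the per-track timings and track counts are this post's rough estimates, not measured data):

```python
# Rough auditioning-time estimate using the post's own figures:
# ~5 s per fast-paced track, up to ~60 s per ambient/classical track,
# and about 1000 tracks heard per great/unforgettable find.
def audition_hours(track_count, seconds_per_track):
    return track_count * seconds_per_track / 3600.0

fast = audition_hours(1000, 5)      # fast-paced material
ambient = audition_hours(1000, 60)  # ambient/classical material
print(f"fast-paced: {fast:.1f} h, ambient/classical: {ambient:.1f} h per 1000 tracks")
```

    That is pure listening time only, which is consistent with reserving a couple of weeks once note-taking and re-listening are added on top.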

  • [Updated Demo] The Proof is in the Pudding: Here is my Fast TreeListView

    @BitFlipper: Like I said, I have an Nvidia GTX 760. I did ask what hardware you had in case you didn't see that lag; care to elaborate (CPU, GPU, motherboard model at least)? I'm not really into AMD graphics, as their reviews are very consistent about driver issues (at least in gaming use; AMD GPUs seem to have focused more on GPGPU in recent years). It would be interesting if people reading this and trying it out could give their GPU/CPU details. The difference could be due either to the GPU drivers, or to some overhead that's "beaten" when one's CPU is fast enough, making the lag go away.

  • [Updated Demo] The Proof is in the Pudding: Here is my Fast TreeListView


    If this doesn't lag for you, then perhaps you could post a system hardware information dump, with driver versions as well, for reference:

    Take the updated demo2.exe from the OP, expand all nodes, maximize the window (at least 1600x1200 resolution), then click and drag the scroll bar at the speed you'd use if you weren't interested in reading every line but still wanted to notice a line with a different color, a visual anomaly, or a different pattern. That's a typical use case when scrolling anything too quickly to read. Taking into account that LCDs begin to blur the text at lower scroll rates than CRTs, perhaps scroll a bit faster than the rate that would blur the text on a typical LCD.

    Now, while scrolling at that speed, watch how tightly the visual center indent on the scroll bar "thumb" you are moving follows the mouse pointer. If it stays "glued" to the pointer, then I'll have to buy some of what you are having, because on my system the "thumb" jumps, lags, and jerks behind worse than anything I ever saw on an Amiga 500, where things didn't do any of that. If I scroll this IE8 window on Win7, the center of the "thumb" stays within about 10-15 pixels of the mouse pointer. On the treeview demo the distance grows to 50 pixels, which essentially means the list is that far behind in updating the view relative to where the pointer is.

    And I'd be surprised if we weren't in agreement that if you can scroll three pages of content smoothly, then scrolling any number of additional pages should be no issue, as long as that content sits in RAM to a degree that can accommodate the scrolling pace. After all, the whole thing should boil down to flipping some buffers and changing the value of a pointer, just like seeking or modifying the playback rate of audio.

    If good list/tree view performance is conditional on whether the items can resize, then an ideal API would have something like viewcontent.CanResize = false; which would guarantee the good scrolling performance.

    I'd be *most* impressed with a demo that generates dynamic color values on the CPU for every pixel on the screen and scrolls those "butter smooth"* across the screen while changing them based on realtime input with next to no latency. That's pretty much what any graphics developer would ideally have for a truly fluid experience interacting with the visualized data. Ideally, of course, Windows would notice that you are about to scroll the view and would ensure that the scrolling (or audio playback/recording, or input capturing) can proceed smoothly.

    * (In my experience you need to increase the process priority, have excellent drivers, and postpone Windows tasks by letting the system know it isn't idle if you are using e.g. a MIDI controller. And of course some power management features may also interfere, which is why audio-related advice often says to set Windows to "High performance". There are so many tasks on Win7 that run when idle that I had to reschedule ~10 tasks just to avoid unexpected HDD spin-ups.)
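    The "flip a buffer, change a pointer" model of scrolling can be sketched in a few lines (a toy illustration with no real rendering; the dimensions and the per-pixel color function are made up):

```python
# Toy CPU framebuffer: a color computed per pixel, "scrolled" by moving an
# offset into a tall buffer instead of copying rows -- the change-a-pointer
# model rather than a per-frame blit.
W, H, ROWS = 320, 200, 1000  # illustrative screen size and buffer height

def color(x, y):
    # Any cheap dynamic function of position stands in for real content.
    return (x * 7 + y * 13) % 256

# Precompute several screens' worth of rows.
buffer = [[color(x, y) for x in range(W)] for y in range(ROWS)]

def visible_rows(scroll_offset):
    # Scrolling selects which window of the buffer is shown; no pixels move.
    return buffer[scroll_offset:scroll_offset + H]

frame_a = visible_rows(0)
frame_b = visible_rows(50)
assert frame_a[50] is frame_b[0]  # same row object: nothing was copied
```

    Once the content is resident like this, moving the visible window by 50 rows per frame costs the same as moving it by one, which is why lag that grows with scroll speed points at overhead elsewhere in the pipeline.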


  • A case for moving desktop motherboards to non-user expandable RAM and adding a user swappable GPU socket

    @cbae: Well, if I got to design the PC standards, I'd start from the cooling. I'd want a thermal interface standard for consumer desktop PCs that specifies how heat is moved from the cores to outside the case. One option I'd look at is to flip the CPU and GPU to the bottom of the motherboard; the outer shell of the PC case would then be in two parts, one that goes across the sides and the bottom, and one across the top, front, and back. The bottom part would contain the heatpipes and would need to be removed only when swapping the GPU/CPU, at which point you'd have to re-apply the paste. It would have screws on the bottom and the X-plate at the top of the motherboard. With the space freed inside the case from the cooling, various configurations become possible, from no PSU (an external PSU brick), to an internal PSU, to more drives, etc., all swappable just by removing the top of the case. And of course you could upgrade the processors without removing anything else.

    edit: Yeah, I realize this option has some issues: heat prefers to rise, and if the motherboard were mounted at the top of the case, the top would likely need some kind of dissipation structure, which might look a bit silly and gather dust. But the internals could then be entirely fanless, at least.

  • Latency-aware virtual filesystem with deduplication for workstation virtual machines?


    I actually did write up how it would be done, along with notes that the guest OS would need to support it, but I deleted that reply after concluding that even if it could be done, it would be unlikely to happen.

    However, there is another scenario where this is interesting: I want to compile code, then edit the code while continuing to run it, and then apply those edits with minimal runtime downtime by suspending (after the threads have "arrived" at a hotpatchable state) and applying the changes. The way this would work is that the compiler has already prepared, in the running executable, various locations and checks in each thread, so that when the "hotpatch signal" goes active, the running release-mode optimized executable runs the threads until execution arrives at a place where it's safe to perform the hotpatching.

    There may be scenarios where this is not immediately possible, and perhaps you'd have to do something in the running app to help it along so all the threads wind up in the hotpatchable state. I suppose for a legacy language like C++ there would be a large number of cases where the hotpatching would not work, unless the project being built was also designed to avoid falling into states where hotpatching fails. A modern language or new compiler development (e.g. the C# as a systems programming language project) could factor these hotpatching challenges into its design somehow, so that developers don't fall into the non-hotpatchable state too easily.
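    A minimal sketch of the safe-point mechanism, using cooperative Python threads as a stand-in (the flag names and the patched function are all made up for illustration; a real compiler would emit the safe-point checks itself):

```python
import threading

# Toy hotpatch-at-safe-points model: each worker polls a "patch requested"
# flag at safe locations, parks until the patch is applied, then resumes
# with the swapped-in code.
patch_requested = threading.Event()
patch_applied = threading.Event()

current_step = lambda n: n + 1  # the "code" to be hotpatched

def worker(results, idx, iterations):
    n = 0
    for _ in range(iterations):
        if patch_requested.is_set():  # compiler-chosen safe point
            patch_applied.wait()      # park here until patching is done
        n = current_step(n)           # global lookup sees the new version
    results[idx] = n

results = [0, 0]
threads = [threading.Thread(target=worker, args=(results, i, 100_000))
           for i in range(2)]
for t in threads:
    t.start()

patch_requested.set()            # hotpatch signal: reach a safe point
current_step = lambda n: n + 2   # swap the implementation
patch_applied.set()              # release the parked workers
for t in threads:
    t.join()
```

    The awkward case the post describes maps to a worker that never reaches the `is_set()` check, e.g. one blocked inside a long call; that thread never parks, and the patch can't be applied safely.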

  • Steam seems OpenGL bent (of interest to DX/OpenGL devs)

    Perhaps the experiences of some people (one used to work for Epic) who have been building an entirely new type of game engine from the ground up, and who have experience with both DX and OpenGL, are worth hearing out too.


    If I look at my favorite games of all time, few used either API exclusively. I read some claims that with Voodoo/Glide some games looked better than with Nvidia (TNT1/2?), though the best-looking games tend to be software rendered using custom engines. Carmack had a talk about lighting which shed some light on why that might be: essentially, most 3D games calculate the light (and final color) in a manner that tends to look rather unconvincing, in order to do it quickly. With some post-processing shader tweaking, translation tables, and such, that issue can be remedied to a degree, but the result still tends to look artificial and "plasticky", just a bit more aesthetically pleasing. Nvidia had some interesting demos at their recent conference which offered a glimmer of hope for a better future, but there was still a distinct plastic feel to it at times, and you could hear it in the audience response.

  • A case for moving desktop motherboards to non-user expandable RAM and adding a user swappable GPU socket

    @Ion Todirel: "Give 'em enough rope, and they'll hang themselves" & "Scarcity breeds innovation"

    While I've always looked down on consoles for their controllers and for games that have tended to be overall less appealing to me than PC games (ME1 was a nice exception), there are some things I think would improve PCs that consoles could have had some influence over. Now that consoles have become so much like PCs, I'm not sure if this is still the case. I don't own a console, but I do follow the hardware developments and major game releases. The console advantages over the PC were more control over latency (too bad neither MS nor Sony put forth any standard for end-to-end I/O latency for consoles, which would have included the TV as well), and no installers, no updates, and no patches. But that of course changed in the last 10 years. So now the only advantage consoles have is that when they're new, there might be a moment when no easy cheating hacks are available for online games. And how big a deal is that, really, given that the more competitive gaming is on the PC? When MS started talking about Games for Windows Live, I hoped it would address these issues on the PC, but if you look at how it's described, it sounds like it addresses a whole bunch of things I didn't know anyone even cared about.

  • How-to: Skip/stop Windows software RAID1 mirror resynchronization (when unnecessary)

    @figuerres: I wholly agree. The ideal hack would be to modify Windows itself so that it doesn't start resyncing drives when there's no need to.

    I looked at what the resyncing for RAID1 does with Diskmon. It blindly copies data from one drive in the RAID1 set to the other. I don't know if modern drives can still have silent bit rot (I know old Maxtors did); if they do, the resync copy operation could overwrite good data with bad. I also don't know whether the sync process alternates the source drive between syncs or always uses the same one.

  • Latency-aware virtual filesystem with deduplication for workstation virtual machines?


    Well I looked around a bit for what the well known vendors have been doing and came up with the following:

    1) VMware has something called Storage DRS, which, when combined with dedup, sounds a bit like what I described in the OP, at least for the case of applying the scenario to VMware VMs.

    2) Parallels Virtuozzo. It sounds like it allows setting up a bunch of apps inside a "container" whose hardware access you can limit. ThinApp on steroids, sort of. I could be entirely wrong, though. Obviously it's not what I described earlier, but depending on what one is doing, it might be good enough.

    3) If MS has solutions to the things discussed above already, I would expect someone here to point them out. I haven't really kept up with MS virtualization stuff.

  • Go to onedrive.com now and you might get 100GB for a year

    Just found this gem in the Register comments:

    One Drive

    One Drive to rule them all, One Bing to find them,

    One Drive to bring them all and in the data centre bind them.

    (posted by "Number6" http://forums.theregister.co.uk/forum/1/2014/02/20/skydrive_becomes_onedrive/)