
Discussions

androidi
  • What's up with the trackpad to visible pointer movement latency?

    I was visiting a brick-and-mortar store and tried the trackpad on a bunch of the mid-range laptops ($400-900). I was "WTH" at how much latency there was from trackpad input to visible pointer movement. And that's before the "WTF" of most of them having Intel N#### CPUs with worse benchmarks than my 10-year-old laptop. And plain ol' HDDs to boot - upgrading the old laptop with an SSD + mSATA-PATA converter for <$100 seems to make more sense than buying these. Could it all be part of a clever sales strategy to drive people to buy high-end Apple products? Maybe all the PC OEMs bought AAPL at the top and now they are looking to sell at break-even and buy their own stock at the low, after which they decide to fix their products? Makes complete sense.

    (the following is largely a repeat of what I've said before)

    I was going to make a thread complaining about those razor-sharp edges on the Apple tablets and laptops, but then came across some good folks who pointed out that people noticing these are "holding it wrong" and then linked to Apple's ergonomics web site. Hard to argue with that. At this point I'm waiting for a 13" laptop that has one SSD in the tablet/display part and auto-sync to a second easy-to-swap SSD in the keyboard/large-battery part of it. So two computers for the price of one, that seems like a deal. Will I actually *buy* such a device? It completely depends on a) how eye-straining the display is, b) the end-to-end latency of various peripherals, c) if it's a 15"+ size device there should be an ordering option for choosing between having a numpad or having standard cursor keys + the usual ins,home,pgup - del,end,pgdn rows (and obviously the F-keys need to act as F-keys unless I'm actually running some office suite or some other deal that requires some silly-keys).

  • What's with all the complaining about high DPI displays for Windows desktop?

    1990-1992 (shadowmask?) VGA CRT at 320x200 native vs. LCD (doubled?). The idea mentioned in the previous post is to present all graphics that weren't rendered at the doubled resolution through this type of re-arrangement.

    Unfortunately the source of the images didn't give exact details of the configuration used to take these shots, so I'm guessing.

  • What's with all the complaining about high DPI displays for Windows desktop?

    I think I underestimated the "tricky" business in the OP. Clearly, you'd have to somehow ensure that text gets rendered afterwards or in a separate buffer or something, to avoid re-doubling its size. It's to be expected that some text would not be rendered at the higher resolution depending on how it was done, so in the best case you wind up with a mix of higher-quality and lower-quality rendering.

    However, I came across some interesting images that may suggest a solution to this problem - essentially, instead of rendering the bitmap graphics just "doubled up", they would be rendered in the same manner as the first VGA CRTs from the 1990-92 timeframe. Currently pixels are rectangular in flat panels, leading to a very "sharp" look. However, that's not what you really want, because the intention is to not see individual pixels. Problem is, it's probably not enough to just double the pixel density to create this effect, and simulating it may not be free. So perhaps changing how the pixels are arranged on the display itself to resemble old CRTs will help.

     

  • What's with all the complaining about high DPI displays for Windows desktop?

    http://www.hanselman.com/blog/LivingAHighDPIDesktopLifestyleCanBePainful.aspx

    I don't really understand what the problem is.

    If you do a nearest-neighbour doubling of everything, it should result in a 1:1 identical look to the lower resolution if the panel DPI is doubled, right? If it doesn't, then please educate me as to why not.

    Now assuming it does result in that 1:1 identical look, all that needs to be done is to replace all text rendering calls with ones that exactly double the resulting size (the tricky part).
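    Just to make "nearest-neighbour doubling" concrete, here's a minimal sketch in plain C# (my own toy code, not any real OS scaling path): each source pixel is copied into a 2x2 block, so on a panel with exactly double the DPI the image covers the same physical area it did before.

    static class IntegerScaling
    {
        // Nearest-neighbour 2x upscale: every source pixel (an ARGB int here)
        // becomes a 2x2 block in the destination. On a panel with exactly
        // double the DPI the result covers the same physical area as before.
        public static int[,] DoubleUp(int[,] src)
        {
            int w = src.GetLength(0), h = src.GetLength(1);
            var dst = new int[w * 2, h * 2];
            for (int x = 0; x < w; x++)
                for (int y = 0; y < h; y++)
                {
                    int p = src[x, y];
                    dst[2 * x, 2 * y] = p;
                    dst[2 * x + 1, 2 * y] = p;
                    dst[2 * x, 2 * y + 1] = p;
                    dst[2 * x + 1, 2 * y + 1] = p;
                }
            return dst;
        }
    }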

    Now of course most people probably use non-integer-multiple scaling with high DPI, but the reason for that has a lot to do with aspect ratios for desktop-size monitors (to gain actual "practically usable" vertical space of 1200+ "practically usable pixels in terms of proportionally correct legacy apps" at 96 dpi). MS needs to get some balls here and remove all support for movie aspect ratios from Windows editions that are not intended for movie watching. This will force OEMs to start ordering 4:3 panels. 4096x3072 in 24" monitors can't come soon enough, as old CRTs are becoming beyond repair.

    And that's a real improvement, considering the actual "practically usable pixels in terms of proportionally correct legacy apps" resolution is then 2048x1536 (except with text being rendered by Windows at double resolution where possible) - a resolution perfectly supported by all apps that matter (those made before silly resolutions and general foolishness started to plague the industry, like Battlefield 1942). A perfect transition for most people still using 1600x1200 and 2048x1536 CRTs.

     

    Now if someone can explain why I am wrong I'd be pleased to hear it. Apple realized this is how to go at this problem long ago, and the OEMs only get on board if they get a strong message that anything other than doubling the resolution from standard CRT resolutions (2048x1536) is complete nonsense that no one even wanted, except those who can't afford to buy a TV for watching those crap movies. They apparently didn't get the message that it's the series that were good and not the movies (well, except for Voyage Home and Generations, I loved those).

     

  • Can VS create a visual graph of typical high performance code?

    It seems I confused the point.

    What I meant had nothing to do with profiling or *getting* high performance. It was just an observation based on having seen a bunch of open source code. I'm not saying that performance has anything to do with this, but my feeling from having seen a lot of code is that the projects that have good performance tend to come in C-style code and have either "meaty/large" method bodies or many smaller methods in a single file. Perhaps there are no classes being used at all (C) - so my point was: what good is "Code Map" if it can't create a map of the kind of code that really needs some sort of map? (e.g. a 200k .c file with no classes). IDA Pro can create such maps from assembly code, so creating such maps from source should be no problem.

     

  • Track file Read operations programmatically?

    FSUTIL behavior set DisableLastAccess 0

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem]
    "NtfsDisableLastAccessUpdate"=dword:00000000  

    Besides the crash-risky hooking / driver-installing approaches (some sort of user-mode driver might work here too), ETW is the only thing I know about. And I'm not sure if it's possible without elevation. And all ETW-using apps I've tried in the past have a nasty feature of spinning up all disks in the system when you stop tracing, at least on Vista/Win7.

    Show providers at a cmd prompt (IDK if this helps or not as I didn't play around with it): logman query providers

    (Microsoft-Windows-Kernel-File            {EDD08927-9CC4-4E65-B970-C2560FB5C289} sounds promising)

    A couple of articles I've saved from the web locally... One has a sample program; a rough sketch of my own is below the links.

    How to consume ETW events from C# - Daniel Vasquez Lopez's Blog - Site Home - MSDN Blogs

    Windows high speed logging: ETW in C#/.NET using System.Diagnostics.Tracing.EventSource - Vance Morrison's Weblog - Site Home - MSDN Blogs

    An End-To-End ETW Tracing Example EventSource and TraceEvent - Vance Morrison's Weblog - Site Home - MSDN Blogs

     

    http://blog.mozilla.org/sfink/2010/11/03/etw-part-4-collection/
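
    To give a rough idea of where I'd start with the TraceEvent library those articles talk about - an untested sketch, assuming the Microsoft.Diagnostics.Tracing.TraceEvent NuGet package and an elevated prompt:

    using System;
    using Microsoft.Diagnostics.Tracing.Parsers;
    using Microsoft.Diagnostics.Tracing.Session;

    class FileReadTrace
    {
        static void Main()
        {
            // Kernel ETW sessions need admin rights.
            using (var session = new TraceEventSession("FileReadTraceSession"))
            {
                // Ask the kernel provider for file I/O events.
                session.EnableKernelProvider(
                    KernelTraceEventParser.Keywords.FileIOInit |
                    KernelTraceEventParser.Keywords.FileIO);

                // Print every file read as it happens.
                session.Source.Kernel.FileIORead += data =>
                    Console.WriteLine(data.ProcessName + " read " + data.IoSize +
                                      " bytes from " + data.FileName);

                Console.CancelKeyPress += (s, e) => session.Dispose();
                session.Source.Process();   // blocks and pumps events until the session stops
            }
        }
    }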

     

  • Satya ' self down...

    Perhaps what should be taught in OS design is how to make a low-latency audio+MIDI+GPIO mixing desk with plenty of routing abilities and interfacing from high-level languages. The rest of the OS services and stuff can then be built on top of that foundation, scheduled such that it won't interfere with the mixing desk's performance characteristics.

     

  • What libs you use in .NET for disk bounded logs+random access w/prefetching and statistics on streams?

    A couple of links I'm looking at (a toy C# count-min sketch follows the links):

    Streaming statistics:

    Count-min sketch presentation by MS research dude https://www.youtube.com/watch?v=OOZC4KCErN0

    https://sites.google.com/site/countminsketch/code

    http://research.neustar.biz/2013/09/16/sketch-of-the-day-frugal-streaming/

    https://github.com/tdunning/t-digest

    https://gist.github.com/debasishg/8172796 (collection of links on streaming statistics)

    Storage/persistence:

    https://github.com/discretelogics/TeaFiles.Net

    https://github.com/ilyalukyanov/Lightning.NET

    http://stsdb.codeplex.com/

    https://github.com/kjk/volante
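
    For reference, the count-min sketch from the links above is simple enough to write down - my own toy C# version, not taken from any of those libraries: a few hash rows of counters, updates increment one counter per row, queries take the minimum, so counts can only ever be over-estimated.

    using System;

    // Toy count-min sketch: 'depth' independent hash rows, 'width' counters per row.
    // Add() bumps one counter in each row; EstimateCount() takes the minimum,
    // so estimates can over-count but never under-count.
    class CountMinSketch
    {
        readonly long[,] table;
        readonly int width, depth;
        readonly int[] seeds;

        public CountMinSketch(int width = 2048, int depth = 5)
        {
            this.width = width;
            this.depth = depth;
            table = new long[depth, width];
            var rng = new Random(12345);
            seeds = new int[depth];
            for (int i = 0; i < depth; i++) seeds[i] = rng.Next();
        }

        int Bucket(string item, int row)
        {
            // Cheap seeded hash; a real version would use proper pairwise-independent hashes.
            unchecked
            {
                int h = seeds[row];
                foreach (char c in item) h = h * 31 + c;
                return (int)((uint)h % (uint)width);
            }
        }

        public void Add(string item, long count = 1)
        {
            for (int row = 0; row < depth; row++)
                table[row, Bucket(item, row)] += count;
        }

        public long EstimateCount(string item)
        {
            long min = long.MaxValue;
            for (int row = 0; row < depth; row++)
                min = Math.Min(min, table[row, Bucket(item, row)]);
            return min;
        }
    }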

     

  • What libs you use in .NET for disk bounded logs+random access w/prefetching and statistics on streams?

    What libs do you use in .NET for disk-bounded logs, with an API for random access with configurable prefetching, and for statistics on streams?

    It would be ideal if, using the same API, I could have the values either stored as individual files with optional auto-defragmenting, or, if the values are mostly under e.g. 4 KB, have the store configured to be a single file. To the random-access API this selection should be transparent - roughly the shape sketched below.
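
    Something like this, hypothetically (the names are mine, just to make the idea concrete; a real store would persist the index, prefetch and defragment):

    using System;
    using System.Collections.Generic;
    using System.IO;

    // The caller only sees keyed random access; whether a value lives as its own
    // file or inside one big store file is decided when the store is opened.
    public interface IRandomAccessStore : IDisposable
    {
        void Put(string key, byte[] value);
        byte[] Get(string key);
    }

    // One file per value - fine for large blobs.
    public sealed class FilePerValueStore : IRandomAccessStore
    {
        readonly string dir;
        public FilePerValueStore(string dir) { Directory.CreateDirectory(this.dir = dir); }
        string PathFor(string key) { return Path.Combine(dir, Uri.EscapeDataString(key)); }
        public void Put(string key, byte[] value) { File.WriteAllBytes(PathFor(key), value); }
        public byte[] Get(string key) { return File.ReadAllBytes(PathFor(key)); }
        public void Dispose() { }
    }

    // Toy single-file variant: append-only data file plus an in-memory index.
    public sealed class SingleFileStore : IRandomAccessStore
    {
        readonly FileStream file;
        readonly Dictionary<string, Tuple<long, int>> index = new Dictionary<string, Tuple<long, int>>();

        public SingleFileStore(string path)
        {
            file = new FileStream(path, FileMode.OpenOrCreate, FileAccess.ReadWrite);
        }

        public void Put(string key, byte[] value)
        {
            file.Seek(0, SeekOrigin.End);
            index[key] = Tuple.Create(file.Position, value.Length);
            file.Write(value, 0, value.Length);
        }

        public byte[] Get(string key)
        {
            var entry = index[key];
            var buf = new byte[entry.Item2];
            file.Seek(entry.Item1, SeekOrigin.Begin);
            file.Read(buf, 0, buf.Length);   // toy code: assumes the read isn't split
            return buf;
        }

        public void Dispose() { file.Dispose(); }
    }

    Code written against IRandomAccessStore then doesn't care which layout was picked.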

     

     

  • Can VS create a visual graph of typical high performance code?

    As far as I understand, IDA creates such a map/graph without running the code. And there are no classes involved, obviously. I'm talking more of a "possible method call code map" than a "class diagram".

    I wasn't talking about graphing what would happen if it was executed; rather, just what the possible calls/jumps are and what macros could potentially run in that method. Of course, if there's some dynamic code it's not going to show anything for the methods involving it.

    The purpose of this would be to get a quick overview of those "monster size code files" vs. just scrolling through them and trying to memorize what calls what. And I mentioned high performance because whenever I look at some high-performance code it's more likely, imho, to come in "enough methods to make scrolling a pain". Of course, if it did come in many classes you'd still be none the wiser; in fact you'd have to click around more to find out what's going on. Hence the Code Map feature.

    What I'm saying is that I tried that Ultimate version and it didn't work as I would have expected. It only created a map of the class relations. Boring! Such tools were around 10 years ago. There's a lot of legacy high-perf code that has years of investment in it and doesn't make sense to rewrite from scratch. Tools should map out that kind of code, and that means doing what I specified. (Then if one did choose to start refactoring such code, the visual map of calls might help too.)

     

    Replace the assembly with C/C#, and somehow use the method scopes and names to create some sort of enclosing boxes, with the method name acting as an overview of what's going on in the box.
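
    As a rough sketch of just the extraction part, using Roslyn (assumes the Microsoft.CodeAnalysis.CSharp package; syntax-only, so it lists possible calls by name - the boxes/graph layout would still be up to the tool):

    using System;
    using System.IO;
    using System.Linq;
    using Microsoft.CodeAnalysis.CSharp;
    using Microsoft.CodeAnalysis.CSharp.Syntax;

    class CallMap
    {
        static void Main(string[] args)
        {
            // Parse one big .cs file (path in args[0]) and dump "method -> calls that appear inside it".
            var tree = CSharpSyntaxTree.ParseText(File.ReadAllText(args[0]));
            var root = tree.GetRoot();

            foreach (var method in root.DescendantNodes().OfType<MethodDeclarationSyntax>())
            {
                Console.WriteLine(method.Identifier.Text);   // the "enclosing box"
                var calls = method.DescendantNodes()
                                  .OfType<InvocationExpressionSyntax>()
                                  .Select(c => c.Expression.ToString())
                                  .Distinct();
                foreach (var call in calls)
                    Console.WriteLine("    -> " + call);
            }
        }
    }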