
Discussions

androidi
  • Windows 10 - The next chapter

@Craig_Matthews: Is the minimum size independent of the DPI configuration?

    I downloaded the new build, though having seen some comments about the Start menu I'm not really sure I'll bother installing it. I use the Start menu a lot, and if its usability is objectively, measurably worse* than in 7, then going to 10 is more about having to, due either to lack of support or some hardware requirement. (*I went through the details of this a few months ago right here; if MS doesn't "listen" to the forum as they claim to, well, too bad, because if they want 7 users to upgrade, they'd better listen to them rather than those already on 8.x)

  • Windows 10 for Windows Phones

    Umm. So does the Lumia 720 get this or not? And if it does, why doesn't the newer 1020 get it...

    Sources:


    http://www.microsoft.com/en-gb/mobile/support/product/lumia720/softwareupdate/

    http://www.gsmarena.com/nokia_lumia_720-5321.php

    http://www.gsmarena.com/nokia_lumia_1020-5506.php

    According to those pages, the 1020 was released after the 720. Yet some media* are claiming the 1020 is older and that's why it doesn't get the update. Anyway, this is slightly annoying given that I just recently got the 1020 for the camera (my requirements being a slim, not-too-large phone with a dedicated camera button and a non-metallic shell) and it's annoyingly slow.

    http://mobilesyrup.com/2015/01/18/nokia-lumia-1020-wont-benefit-from-lumia-denim-camera-update/


  • How would you defeat this argument against open source?

    ("marked as spam" should be undoable; I somehow clicked that, oops)

    "Argue all you like. Several multi-billion dollar industries (including all major Graphics card vendors and Anti-virus vendors) with *-tons of lawyers disagree with you, and several major companies including both Microsoft and Steam do it to programs that run on their platforms as well."

    Is "Big'N lawyered up" generalizable to any company, though? E.g. some very small company producing patches and such might have this sort of IP thing as a risk. You'd probably be correct to argue that if they wish to stay in business, then they need to find ways to mitigate whatever risks, "put out fires", etc. I find that aspect of staying in business in the face of competition problematic: what if your competition has lower standards and various other advantages? I find it concerning that there's even the possibility of a situation where a lot of risks were mitigated but you still ended up asking "whatever it takes to stay in business?". Under pressure that could lead many into a "slippery slope" situation, because by the time all that risk-mitigation work is implemented, a lot has probably been invested already. One example below.

    Had Google not acquired YouTube, would YouTube have gotten away with using a model that's not all that distinct from something like Megaupload and the like? The only difference I see between the two is that Megaupload was claimed (I haven't verified this) to have been more public about, and possibly also incentivizing, either a certain category of uploads or activities likely to result in such uploads (imagined example: targeted purchasing of advertising in places likely to have uploaders of copyrighted material, and then offering some sort of uploader benefits). With YouTube, only emails found later have suggested that the founders anticipated that a certain category of videos (stuff copyrighted by someone other than the uploader) would be critical for popularity (and probably for the exit strategy, if they had one), and that they weren't taking measures in those initial stages of growth that they could have taken to respect the copyright holders at the cost of limiting their own growth and chances of a successful exit. (Maybe the emails didn't put it in such specific terms, but either way, at some point things can be argued to have become obvious despite whatever claims the YouTube founders make.)

    I'm sure a lot of people had ideas similar to what many businesses do; the difference in success seems, in some or large part, to derive from the ability to postpone the seemingly inevitable in the hope that during that time a solution will be found to postpone it indefinitely.

  • How would you defeat this argument against open source?

    That point about some license restricting the ability to mod the exes is certainly something that might need to be reworked into "you are allowed to make changes to our executables as long as ...", which is similar to Detours, where you aren't allowed to modify it in such a way that using it doesn't leave a "this was detoured" sign on the "front door" of the executable. The difference would be that instead of being limited to something like Detours, you could get back the source and recompile it. At that point there could be a mechanism similar to Detours: you'd e.g. add the modded .exe/.dll back into the folder under a different name, like blah.mod.exe, or use some other obvious way to identify that a mod is in place. Then, provided the mod cert was loaded pre-boot, the OS would hook in the changed parts or whatever makes sense. Something similar to the appcompat patch system, but with a C#-like language and readable source for making mods.
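The blah.mod.exe idea above could be sketched as a loader-side lookup. This is purely an illustration of the post's suggestion: the naming convention and the "mod cert enrolled pre-boot" flag are assumptions from the post, not any real Windows mechanism.

```python
def resolve_binary(requested, folder_listing, mod_cert_enrolled):
    """Return the file name the loader should actually map in.

    A mod is identified by the visible naming convention from the post:
    "blah.exe" is overridden by a sibling "blah.mod.exe", but only when
    the user's mod certificate was enrolled pre-boot.
    """
    stem, dot, ext = requested.rpartition(".")
    modded = f"{stem}.mod.{ext}" if dot else f"{requested}.mod"
    if mod_cert_enrolled and modded in folder_listing:
        return modded  # obvious on-disk sign that a mod is in place
    return requested
```

For example, `resolve_binary("blah.exe", {"blah.exe", "blah.mod.exe"}, True)` picks the mod, while with `mod_cert_enrolled=False` it falls back to the original `"blah.exe"`.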

    I would much prefer this idea to the Linux "open source if you have ample time and dedication to work on this particular project" model where, as one commenter said above, making some minor mod to a major project is no minor task.

    Yeah, sure, you can do stuff with Detours if you really know what you're doing (or, as with the Linux build setup, are willing to spend ample time to make a possibly one-line change) with some help from a good debugger or IDA, but to me that's a) tedious (I've used Detours a few times too) and b) unless you're really an expert, it's hard to be sure whether you're doing more harm than good, largely due to the bigger learning curve and more "traps for young players". And overall it's just nowhere near as simple as using Reflector, making some mods, and recompiling. E.g. I "ported" a 32-bit C# app (some P/Invokes and libs needed fixes) to 64-bit. Last I checked, bringing some 16-bit Windows functionality to 32-bit Windows took a little more effort. (Perhaps 32 to 64 is easier, if you have the source, that is.)


    Now there's the good argument that you can hire someone to make the changes you want if you don't have time to learn to do them yourself. But then there's the question of what happens when OS updates get applied and you had some more complicated mods contracted out. Now you're going to have to pay to keep them up to date, and since you don't know anything about the work, it's a bit of an uneasy position to get into. For a company depending on some of the mods it would still make sense, but I'm more interested in usability fixes and such, and they aren't so critical that I'd want to "open the tap", so to speak. (Thinking about this, I also see the need to know whether an OS update to a modded executable is likely to affect the modded functionality; some sort of modern language might allow the compiler to figure that out better than with C++.)

    The model suggested in the OP solves issues with both closed and open source if the point of view taken is the practical improvement, modding, or fixing of software without dedicating significant resources to a particular project. I have plenty of my own projects, so I simply don't want to spend more than a few minutes getting something to a state where I can actually get to fixing or modifying the functionality.


  • How would you defeat this argument against open source?

    In response to

    > "Attackers aren't going to politely wait for Microsoft to fix issues like this, and Microsoft won't fix issues like this unless they are pressed to. And this brings up the glaring flaw with closed source products. If a third party flagged an issue in an open source product, any user that is concerned enough could potentially fix it or patch their own systems themselves. With closed source, we have to wring our hands and wait for someone at Microsoft to care enough to fix it."


    Argument (this assumes the new OS shipped binaries that could be decompiled into a form that can be recompiled again trivially):

    Let's say I am a consumer with routers running Linux. Even if I knew about development in some manner, I wouldn't necessarily have the time or interest to start fixing bugs in gear running platforms that might require a complete recompilation, setting up a remote-build system, and whatnot.

    Contrast this C/C++ open-source model with one where the operating system and everything else were written in, e.g., a variation of C# called M# that was used to develop a real operating system.

    In this managed-language model, if my router or phone etc. has a bug, I can download the affected binary from the router, get back source code that's readable enough that I could actually make larger changes to it, and send it back to the router. Yes, you could do this with IDA Pro, but having actually tried it, I can tell you it's nowhere near as easy as with C#.

    By "readable enough" I mean that with C# (and probably Java etc.) you can decompile a binary and get back good enough source that within a few minutes you can be recompiling it again. The only problem would be if the OS used signed executables and would not allow replacing them with ones that you self-signed. So while waiting for the official patch, you'd have to set the OS into a mode that accepts self-signed executables. The certificate for self-signing could be put into the hardware cert store through a firmware interface pre-boot. This way the entire system would stay secure despite using self-signed, modded OS DLLs. (edit: you'd also need a way to select whether you want an OS update to overwrite your mod or not)
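A minimal sketch of the signing policy described above, treated as a pure decision function. All the names here are hypothetical; no real OS loader exposes an API like this.

```python
def signer_accepted(signer, vendor_signers, enrolled_self_sign_cert, accept_self_signed):
    """Decide whether the OS loader accepts an executable's signer.

    Vendor-signed binaries are always accepted; a self-signed mod is
    accepted only if its cert was enrolled pre-boot (via the firmware
    interface) AND the OS has been switched into the mode that accepts
    self-signed executables.
    """
    if signer in vendor_signers:
        return True
    return accept_self_signed and signer == enrolled_self_sign_cert
```

So a stranger's cert is rejected even in the permissive mode, and your own enrolled cert is rejected unless you've explicitly flipped the mode on.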


  • No wonder Roslyn is migrating from codeplex to github

    Actually, now that I've thought about this, perhaps the most sensible pre-HDR monitor solution is to have a dedicated keyboard key that controls the monitor calibration. Using any driver adjustments is unacceptable because consumer GPUs don't have enough bits or a smart enough protocol to avoid banding issues, and for LCDs the backlight brightness is a separate variable.

    So the best way to address this issue is to create a Windows 10 certification for PC monitors and keyboards that requires the LED or backlight intensity to be controllable through the driver along with other parameters, such that the user can toggle the display between profiles using a dedicated display-intensity profile key on a Windows 10 keyboard (e.g. a 3-way switch key, so that you have print-calibrated, web, and TV-like brightness, where brightness means the backlight intensity / LED voltage).

    Issues with this solution: it might decrease demand for true HDR (if there is such demand; I don't think it has been drummed up enough yet). Also, people might not know which profile they are using if the brightness differences have been made small through traditional adjustment, because they're not aware of this feature. Current monitors already have a profile toggle, but the interface varies from monitor to monitor, and I doubt many people reach for their monitor's buttons all the time; it's just not as convenient as a volume button on a keyboard. Clearly a "volume" button for the monitor's light intensity is also needed.
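The 3-way key could simply cycle through the three profiles named above. The backlight numbers below are made-up placeholders; only the three profile names come from the post.

```python
# hypothetical backlight intensities (cd/m2) per profile
PROFILES = [
    ("print-calibrated", 80),
    ("web", 160),
    ("tv-like", 300),
]

def next_profile(current_name):
    """Each press of the dedicated key advances to the next profile,
    wrapping around after the last one."""
    names = [name for name, _ in PROFILES]
    return PROFILES[(names.index(current_name) + 1) % len(PROFILES)]
```

A separate "volume"-style key pair would then nudge the intensity value within whichever profile is active.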

  • No wonder Roslyn is migrating from codeplex to github

    Actually, I figured out the answer to "whose idea was it". Probably something to do with desktop print publishing. But if 99.99%* of PC users used PCs for entertainment, why would a couple of desktop-publishing users dictate that everyone had to calibrate their monitors to assume 255,255,255 was tabloid-white? (*Probably even closer to 100%, since PCs to me meant C64, Amiga, and DOS gaming back in '94; print-press use may have only been a decent percentage on Macs and custom print-industry systems, AFAIK. Though print-industry sources tell me that back in '94 they didn't use PCs, so perhaps this was just Microsoft stupidity.)

    A more sensible scheme would have been to have, e.g., 23-200 for paper intensity, 200-255 for very intense/bright objects, and 0-23 for pitch-black stuff, blacker than black ink in a normally lit room. And of course individual monitor adjustments for each range, so if I didn't want "sun-strength white" for 255, I could tone it down to be just slightly more intense than what print-brightness calibration demanded as the max brightness for 200,200,200.

    Of course, the APIs should have been designed such that you had to clearly state the intent of the draw operation, so that regular desktop apps would only be using the 23-200 range.
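As a sketch of that intent-aware API: a draw call declares its band, and channel values get clamped into it. The band boundaries are the ones proposed above; the intent names are invented for illustration.

```python
# 8-bit bands from the scheme above; the names are illustrative only
BANDS = {
    "below-black": (0, 23),    # blacker than black ink in a lit room
    "paper": (23, 200),        # regular desktop apps and documents
    "highlight": (200, 255),   # sun-strength / very bright objects
}

def clamp_to_intent(value, intent):
    """Clamp an 8-bit channel value into the declared draw-intent band."""
    lo, hi = BANDS[intent]
    return max(lo, min(hi, value))
```

Under this scheme, a desktop app requesting 255,255,255 with intent "paper" would actually render 200,200,200, leaving 200-255 free for genuinely bright content.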


  • No wonder Roslyn is migrating from codeplex to github

    1) Take a random project on CodePlex and click History, then download a zip of a particular version from the history (or even just the latest from the history page). Repeat the same process on GitHub. Repeatedly I find that doing this on CodePlex can take minutes or give "We're sorry, but an error has occurred." "Error Reference #0c451015-c296-4b04-9bfe-50a460eaba4e". GitHub is very quick, at least on normal-size projects.

    2) Information density is lower, and the site is white background all over, which is annoying if you prefer dark themes. Can't wait for some broken implementation of OLED HDR PC monitors; all these white web sites will probably look like staring at the sun. Whose stupid idea was it that 255,255,255 RGB on a PC should represent anything but a photo of the sun or some other super-bright object? With my first CRTs this wasn't much of a problem, as they had physical knobs to quickly tweak the settings. Also, in DOS the text was not white on black, because people back then had a clue. The clue flew out the window when Windows came around (Notepad, WordPad, etc. had 255,255,255 white, so if your monitor is adjusted to paper-white, all the games, photos, etc. look dull unless the monitor is readjusted). I hope Windows with HDR monitors will be smarter and not allow drawing text using any API on an HDR max-white background.


    Of course, don't expect to have any HDR consumer monitors unless MS puts 1+1<>255 together and figures out that black text on white, where white is whatever max the display adapter can handle, is plain dumb.

  • C# 7.0 may bring some M# goodness

    Sort of related talk.

    https://www.youtube.com/watch?v=gFP5YcvQsKM (Too Many Cooks - Exploiting the Internet-of-TR-069-Things)


  • Amazing pair programming (golang, vim, git, appengine, Andrew Gerrand)

    I was missing IntelliSense while watching that, so I decided to skip to the next suggested video; it looks really interesting, btw.

    https://www.youtube.com/watch?v=5BrdX7VdOr0 (Thunderstrike: EFI bootkits for Apple MacBooks)