It would be an interesting improvement if Process Explorer could automatically use a low-level driver to hash the executable sections of running processes, submit those hashes to VirusTotal, and submit the sections themselves if a hash is unknown. After all, why would an attacker drop anything on disk if there's a hole in the server they can come back through later? This needs to be done in a way that gives the attacker no way to detect that the section hashing is being performed. One way would be to run the hashing/capture in the vPro/SMBIOS layer, with a direct connection from there to a network or storage device that is not visible to Windows, so the auditing is completely isolated. Process Explorer could then run on another computer or VM and connect to the low-level capture to get a view of the processes and hashes that can't be manipulated, and the attacker wouldn't be able to tell it's running against the system.
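As a rough user-mode sketch of the hashing half (the real thing would live below the OS, as described above): hash a section's raw bytes and check the hash against VirusTotal's v3 file-report endpoint. The API-key handling and the upload decision are simplified here, and the endpoint usage is only as I understand the public API:

```python
import hashlib
import json
import urllib.error
import urllib.request

VT_FILE_REPORT = "https://www.virustotal.com/api/v3/files/{}"

def section_sha256(section_bytes: bytes) -> str:
    """SHA-256 over one executable section's raw bytes."""
    return hashlib.sha256(section_bytes).hexdigest()

def lookup_hash(sha256: str, api_key: str):
    """Ask VirusTotal whether this section hash is already known.

    Returns the parsed report, or None on HTTP 404 (hash never seen,
    i.e. a candidate for uploading the section itself)."""
    req = urllib.request.Request(VT_FILE_REPORT.format(sha256),
                                 headers={"x-apikey": api_key})
    try:
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return None
        raise
```

The tamper-resistance part (capturing sections from outside the OS) is the hard bit and isn't sketched here; this only shows the hash-then-lookup flow.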
Oct 12, 2013 at 1:03 AM
Can someone who's tried the Oculus Rift tell me how it works in the following sense:
If I have a PC monitor at arm's length, obviously my eyes are focusing at the distance of the monitor. And if I watch the sky, they are focusing at "infinity".
One of the problems with these devices is that since the panel/LCD in the Rift is physically only about an inch away from your eyes, unless there is something that two ophthalmologists I asked claimed to be impossible, your eyes end up focusing just a couple of inches away (a bit like looking at one's nose). And I suspect that if there were some "magical optics" to solve this, they might need individual adjustment, possibly separate adjustments for each eye, at least in my case, unless one is supposed to wear corrective lenses with them as well.
I've read hints suggesting that in some military planes and windshield displays they have somehow solved this, letting the pilot focus "at the sky" while the CGI is projected onto a surface like the windshield or helmet visor. You keep your focus at infinity yet can still read what's being projected from close range (but "at infinity", as far as focusing the eyes goes).
I've long wanted such technology: if one could swap between "infinite focus" and "near focus" while keeping the CGI text readable, it would solve some of the reading-related 'lazy focus' issues that come from extensive focusing of the eyes at one fixed distance, since the eye's focusing mechanism isn't getting enough practice.
If Kinect's big deal is end-to-end response time/latency, I think the focusing issue above is the classic big deal with wearable computer displays (not panel resolution, which some might believe is the issue; visible spacing between pixels is another real issue, but irrelevant unless the focus issue is solved first). Has Oculus Rift really solved that, or have I been right to ignore the hype?
From the articles I've read regarding low-latency audio, ever since ACPI came along the fixes pretty much always involve either swapping hardware or toggling stuff off in the BIOS and hoping the IRQ order changes (in case you need all the stuff you bought on a high-end motherboard).
The best approach IMO would be a simple-to-use user option somewhere that lets you select any one or two devices in the system and have the IRQ usage automatically re-arranged so that those devices get a dedicated/non-shared IRQ that also has higher priority than the others.
E.g. in my system I would manually prioritize audio, MIDI and custom hardware attached to any bus (e.g. data acquisition or some homebrew hardware on a parallel port or GPIO pins) highest, and network (e.g. USB wifi) after that; the other devices can wait until those are done. However, if gaming I might want to prioritize the GPU and input devices first and sound second (with a bigger buffer). (E.g. in TMNF the network latency and sound are irrelevant; it's all about reaction time to what you see.)
This could be limited to the Pro/Ultimate or similar SKUs to avoid big hardware vendors going back to assuming they have a dedicated IRQ for whatever consumer hardware they put out, and MS could offer WHQL certification only for hardware that works with IRQ sharing in normal use cases, leaving the user the choice of which audio device on the system to prioritize highest.
[15:00] - Search no longer showing auto-suggest.
That answer was a bit half-way, IMO.
You said both that it *used to work*, and that some obscure setting you didn't know about controls it (the partial match / auto-suggest) and has it turned off.
So the obvious question is: who turned the setting off, if you guys didn't know about it? This is obviously a major user-experience bug (an unexpected change in user experience that no automatic testing will ever catch in QA), almost at the level of the issues in Vista, except in Vista there was no way to fix them. (And the Windows 7 fix that MS applied makes Explorer slow down little by little until it's unusably slow.)
Anyway, I'm not upgrading from Windows 7 any time soon, since after all these years I have finally found fixes for all the show-stopping issues in Windows 7 (the Aero frame-rate perma-drop to 15 FPS and the Explorer slowdown).
That DJ joke at 37:20 in the binding delay context was funny.
But what is going on in VS2010 and VS2012 that when you scroll the code using the scroll bar, it stops updating momentarily if you scroll too fast? That makes the products feel inferior to VS2008, never mind the fact that if you don't have everything VS2010/2012 accesses on an SSD, they start up slower than VS2008 from an HDD.
These two issues are the key reasons I have not bothered to learn WPF, after finding that they are more the rule than the exception in WPF apps compared to previous technologies. IMO it's not worth bothering with WPF until the issues that make your product feel inferior within the timeframe of the user's first impression are solved, if they can be.
If MS hasn't been able to solve such issues in their flagship product, it just cements my belief that WPF and .NET still have unsolved fundamental perf issues, and a product made with them cannot compete, in the critical first impression, with the same product done with Apple's development technologies.
To everyone's dismay, it would seem the long cold-launch-time issue has migrated to WinRT. Even on a high-end desktop launching from an SSD, the Metro app startup times are unacceptable, and professional journalists have pointed this out as a major issue. As a result, I haven't bothered to look at learning WinRT either, until there's some proof that MS can solve this for the scenario where the user installs an application and runs it for the first time, the "first impression run". The target for this should be <300 ms rather than the 10 seconds the journalists quoted. I can't remember any application in my 486 DOS days that took even 1 second to start from an HDD! And those apps were much more complex than anything I've seen Microsoft ship with Windows 8. In 300 ms you can download and execute some darn complex DOS apps that fit on a floppy or two and haven't been surpassed to date. Example: the Elite 3 game.
There is a technical solution to this, though: instead of downloading Modern apps and cold-running them, download a delta-compressed memory image, where the base for the delta is a blank running Modern/.NET app, and start executing it progressively during the download. NGEN alone is not enough when it comes to WPF bloat.
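The delta idea can be sketched with zlib's preset-dictionary support: the client already has the blank-app base image, so the store only ships what differs from it. The "images" below are made-up placeholders, not real memory snapshots:

```python
import os
import zlib

def make_delta(base_image: bytes, app_image: bytes) -> bytes:
    """Store side: compress the app's image against the shared base snapshot."""
    c = zlib.compressobj(zdict=base_image)
    return c.compress(app_image) + c.flush()

def apply_delta(base_image: bytes, delta: bytes) -> bytes:
    """Client side: reconstruct the app image from the local base + delta."""
    d = zlib.decompressobj(zdict=base_image)
    return d.decompress(delta) + d.flush()

# Toy demo: an "app image" that is mostly the shared runtime plus a little code.
base = os.urandom(4096)                 # stand-in for the blank runtime image
app = base + b"app-specific code here"  # what the installed app actually is
delta = make_delta(base, app)

assert apply_delta(base, delta) == app
assert len(delta) < len(zlib.compress(app, 9))  # delta beats plain compression
```

The progressive-execution-during-download part would need OS support and isn't shown; this only demonstrates why shipping a delta against a shared base is much smaller than shipping the app image itself.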
The "bound scroll bar" performance issues should be solvable with caching on background thread if there's no other solution, there's no technical reason why "scratching" the VS2010,2012 code editor scroll bars should have any dips below 60 FPS.
@Charles: I don't want to name anything specific, just that both the clang talk and the questions during the panels touched on many other things that have impact*. I have to add that you have had a good bit of functional programming and STL visibility here, and things like IntelliTrace and DebuggerCanvas are exciting, but I just wish there were more of this non-language stuff thrown around, especially the stuff MS has an advantage in because they could, if they wanted, take every part of the end-to-end development experience further than today's norm.
I guess I just got spoiled by the language-stars videos talking about language futures quite often here, and when there isn't as much futures talk around things besides the language, the "law of rising expectations" kicked in and I was expecting to hear a similar amount of hype around other things.
* By impact I mean: a lot of the things in GoingNative were things that C# developers maybe take for granted. So having those in C++ is exciting, but what would be exciting for C# developers? How about things that C/C++ is good at, or completely novel stuff that's only possible if you have exclusive access to modify the language, libraries, IDE, debugger and OS to make some compelling feature happen? I don't know what that would be, but it would certainly excite C# guys like me.
I think there's a bit too much focus (at least on the marketing side) on the languages; there are so many other things, many of which were briefly mentioned in the GoingNative panel talking points/audience questions, that seem to be getting next to no "marketing" while being very much key to productivity. I've seen a lot of videos in past years with "language rock stars", but I'd like to hear more about those other things.
Also, I think it would be good to balance all the language/compiler content (and the speakers talking about very exciting stuff being done at other companies) soon with some stuff that is from Microsoft. Otherwise there's a risk that people will get a "done" feeling. By that I mean: what has happened to Notepad (the editor control it uses)? Not a whole lot, at least in terms of supporting unixy line endings, which are mandatory for Notepad because Notepad is the default editor for .txt, and .txt files often have different line endings. MS always has some compatibility reason. Well, I say: you have WinSxS seemingly taking gigabytes already, so why not add a new version of the control (or CLR or whatever) that goes boldly where the previous version can't go because compatibility reasons/existing adoption hinder it?
Interesting interview, but it would have been nice to have some slightly deeper questions:
If an app consists of multiple sequential or parallel executables, so that e.g. excel.exe starts excel2.exe and then excel.exe terminates, and excel2.exe starts multiple different exes with their own windows and then terminates... will this kind of thing work with this model? What if there's also some LPC or shared-memory IPC between these before the termination?
If an app uses CreateFile to open \\.\C: (hope I got that right) or a PhysicalDrive, and in order to run needs to be able to read and write somewhere on the disk without going through the filesystem APIs, will your security layer virtualize this or will the app fail to run?
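I don't know how this research project actually handles it, but one conceivable way a sandbox could virtualize raw-device opens is to rewrite the device path to a per-sandbox backing image before the open reaches the real OS. A toy sketch with entirely invented names and paths:

```python
import os
import tempfile

# Hypothetical mapping of raw device paths to per-sandbox backing images.
DEVICE_MAP = {
    r"\\.\C:": "raw_C.img",
    r"\\.\PhysicalDrive0": "raw_disk0.img",
}

class Sandbox:
    def __init__(self, root):
        self.root = root  # sandbox-private directory

    def open(self, path, mode="r+b"):
        """Stand-in for an intercepted CreateFile: raw-device opens are
        redirected to a sandbox-private disk image instead of real hardware."""
        backing = DEVICE_MAP.get(path)
        if backing is None:
            raise ValueError("not a raw device path in this sketch")
        img = os.path.join(self.root, backing)
        if not os.path.exists(img):
            with open(img, "wb") as f:
                f.write(b"\x00" * 4096)     # tiny zero-filled stand-in "disk"
        return open(img, mode)
```

The app thinks it is scribbling on the raw disk, but every byte lands in a file the sandbox owns, which would also answer the uninstall/cleanup question: delete the sandbox directory and the "disk writes" go with it.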
How do you "install" an app into this sandbox? There was a lot of talk about the lack of 3D/hardware support, but there would have been many more interesting questions about handling the things game installers do, such as the "Sony rootkit DRM": would a game with that rootkit DRM install fine, even if it was just a 2D non-accelerated game? Also, would this approach work to enable better compatibility with Windows 3.x and 95/98 apps/games using the old DX APIs?
Getting old Windows games and apps to run is often more painful than DOS games in DOSBox. If MS were to productize this research, it could end up like the current app-compat layer, which can require a bunch of (too much) fiddling just to find that the app you want to run is not going to run, since even if you set compat mode to "XP", the broken stuff tends to stay broken unless it was specifically tested by people at MS.
I think this type of legacy-compatibility work might be better served by a hybrid development model: a paid core team developing the long-term deliverables, plus letting the community using the product develop their own minor fixes and improvements that could be easily patched into the product on an as-needed basis (by users, so simply that no instructions are needed). E.g. if I as a user run appX, it would check for community-made fixes for appX and let me install those into the sandboxing layer or something, ensuring longevity and broadening compatibility as time goes on even if MS stops active development on the sandbox. Just a thought...