@Gilbert Baron: It could be argued that if it's the government you're worried about, you should implement as much of the encryption/protection etc. as possible yourself, because the company you're relying on to do it for you could be compelled by the government to backdoor the OS, MITM the network, change the environment, and so on. If the protection were implemented inside your app, they'd have to attack the app itself, and with top-notch intrusion detection that could be hard. So half of any security system is really about detecting attacks and layering the encryption so that compromise of the first layer doesn't give access to the second.
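The layering idea can be sketched with a toy example (illustration only — an XOR keystream derived from SHA-256 is not real cryptography, and the key names are made up). The point is that peeling only the outer layer yields the inner ciphertext, not the plaintext:

```python
import hashlib

def keystream(key, n):
    # toy deterministic keystream from SHA-256 over a counter (NOT real crypto)
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def xor_layer(key, data):
    # XOR is its own inverse, so the same call encrypts and decrypts a layer
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

msg = b"secret"
inner = xor_layer(b"app-key", msg)    # inner layer: implemented inside the app
outer = xor_layer(b"os-key", inner)   # outer layer: the platform's, could be compelled open
# stripping only the outer layer exposes just the inner ciphertext, not the message
```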
edit: I got an error with cmake -G "Visual Studio 12" when generating the solution files, but it turned out I had to run it from the VS developer prompt, since the VS environment variables were needed.
One thing I'd be interested in seeing is a patch showing what sort of performance the Intel ADX extensions deliver. They should be available in some laptops shipping with Broadwell.
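For reference, ADX availability is reported by CPUID leaf 7 (EBX bit 19), which the Linux kernel exposes as the adx flag in /proc/cpuinfo. A minimal sketch of that flag check (the parsing helper is mine, not from any tool mentioned here):

```python
def has_adx(cpuinfo_text):
    # CPUID(EAX=7, ECX=0).EBX bit 19 is surfaced by Linux as the "adx" flag
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return "adx" in line.split(":", 1)[1].split()
    return False

# usage on Linux: has_adx(open("/proc/cpuinfo").read())
# on Windows one would query CPUID leaf 7 directly (e.g. via the __cpuidex intrinsic)
```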
Another thing I'd like to see is some sort of faster decimal, where "some sort of" means anything faster than decimal, without the drawbacks of using double or of needing some library that converts everything to (u)longs and so forth.
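For comparison, the "(u)long" approach being alluded to is plain scaled-integer fixed point. A minimal sketch, assuming four decimal places (all names are mine), which avoids double's binary-rounding surprises:

```python
SCALE = 10_000  # four decimal digits of precision, stored in a plain integer

def to_fixed(s):
    # parse a decimal string like "12.34" into a scaled integer,
    # never touching binary floating point
    sign = -1 if s.startswith("-") else 1
    whole, _, frac = s.lstrip("+-").partition(".")
    return sign * (int(whole or "0") * SCALE + int((frac + "0000")[:4]))

def fixed_mul(a, b):
    # multiply, then rescale back down; Python ints don't overflow,
    # but a (u)long implementation would need overflow care here
    return a * b // SCALE

# unlike double, 0.1 + 0.2 is exactly 0.3 in this representation
```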
In Windows 7, stopping the tracing in many if not all ETW-based tools caused HDDs to spin up. Is this still the case with Win 10?
I used to like Process Monitor back when it didn't use ETW. Now it's a tool of last resort because it's so heavy - huge activity when you start it, and then it spins up the disks when you stop it.
I'd find out myself*, but the Start Menu in the 10 previews so far is not back to 7's usability, so at this point I don't see much reason to upgrade - it's not much of an upgrade if you take a usability hit in the most-used Windows feature. Another concern is the "free edition" - let's just hope there's a "good value" edition as well, just with less legal copy and without the features intended to make the free edition somehow profitable.
* I have 10 in a VM, but the spin-up problem doesn't occur in a VM, and you need a bunch of 3.5" HDDs to really notice it.
I don't like those CPU % usage graphs (just like in Task Manager in W8.1) that can't show the usage of the most-utilized core individually - e.g. whether a single physical core is maxed out. If all you have is a % across all cores/CPUs, you have no idea whether 50% means 50+50 (e.g. an E8400 with both cores half loaded) or 100+0 (one core maxed).
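The ambiguity is easy to demonstrate: both load patterns below produce the same aggregate reading, and only a per-core view (or just the busiest core) tells them apart (toy numbers, not measured):

```python
def aggregate_pct(per_core):
    # what a combined CPU graph shows: the average across all cores
    return sum(per_core) / len(per_core)

one_core_maxed = [100, 0]   # e.g. a single-threaded program pegging one core
evenly_loaded  = [50, 50]   # e.g. two half-busy threads

# both read as "50%" in an aggregate-only graph
assert aggregate_pct(one_core_maxed) == aggregate_pct(evenly_loaded) == 50
# the per-core maximum distinguishes them immediately
assert max(one_core_maxed) == 100 and max(evenly_loaded) == 50
```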
Since I got tired of waiting for these improvements, here's what I'd like to be doing, in addition to what I'm doing now in 2013 (except for the color changes and the direct interaction with the debugger):
1. move the caret to where you want the breakpoint
2. press the add-breakpoint key
3. this changes the background color of the line for the characters that follow, until Esc/arrow up/down or "; " is entered (an inline debugger condition that looks just like regular code, except it talks to the debugger and doesn't end up in the code file but in a metadata file)
4. Now if you want multiple conditions, press Return/Enter after the ; instead of space to create an entirely new condition line with the custom "inline debugger condition" background color
5. The code written here can live in a global project that is loaded all the time. So you can write custom debug tool code that's always available, and since it lives there, you don't need to use namespaces or classes. This debug tools project can be in any CLR language.
Advantages over the current solution:
1. to create an x==5 condition, there are a total of 6 key presses excluding the toggle-BP key - "[BPKEY]x==5; " - and then you're back to writing code normally. I suspect the approach shown takes more (space, tab, arrow) - assuming you can use the new system entirely without the mouse.
2. Condition code can be as complex as you want, since it's compiled into the executable like regular code, with a check for whether the condition/debugger is currently active - and the compiled conditions can be changed with E&C at runtime
3. Unlike regular code, this code also gets access to IntelliTrace/the debugger at runtime, by way of the auto-included debug-aids project referencing the VS assemblies
4. Takes less vertical space - it only adds new lines when multiple conditions/actions are present at the same spot - yet is visually obvious thanks to the background coloring (my current solution isn't)
I implemented the above system in my projects some time ago. What I don't have is the ability to talk to the debugger/IntelliTrace, or the custom background color, so my custom BP code is kind of lost in the code (that's why I made it ALL CAPS, like d.BP(x==5);, where d is the global "debugging stuff" class). To help with IntelliTrace I made a load of dummy "nop" methods, d.IT(object,...), that cause IntelliTrace to show the variable contents at that point - but this could be improved a lot by having a "sample global debug tools class" which shows what you can do by talking to the debugger/IntelliTrace when a condition is hit.
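A rough sketch of that pattern - in Python rather than a CLR language, with the names (d, BP, IT) modeled on the comment and the actual debugger interaction stubbed out:

```python
class DebugAids:
    """Global 'debugging stuff' class - the 'd' described above (illustrative stub)."""
    enabled = True   # in the real thing: "is a debugger attached right now?"
    hits = []

    @staticmethod
    def BP(condition):
        # conditional breakpoint compiled in like regular code,
        # but only acting while debugging is enabled
        if DebugAids.enabled and condition:
            DebugAids.hits.append(condition)  # real version would break into the debugger

    @staticmethod
    def IT(*values):
        # dummy "nop" whose arguments force the tracer to record the values
        return values

d = DebugAids  # short alias so call sites stay terse

for x in range(10):
    d.IT(x)
    d.BP(x == 5)  # fires exactly once, when x is 5
```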
There's one (obvious) thing sorely missing from the tool, but as you said, if it were implemented it might become something the malware authors would anticipate. As it stands, they might not bother.
A stronger approach would be to have MS ship Windows with some sort of rootkit-detection dongle that had e.g. a USB port with debug ability, plus a network or WiFi connection for getting updates to the rootkit-detection algorithms externally, without going through the compromised system.
It'd be an interesting improvement if Process Explorer could automatically use a low-level driver to hash the executable sections of processes and submit those hashes to VirusTotal - and, if unknown, submit the executable sections themselves. After all, why would an attacker drop anything on the disk if there's a hole in the server they can come back through later? This needs to be done in a way that leaves the attacker no way to detect that the section hashing is being performed. One option would be to run the hashing/capture in vPro/SMBIOS, with a direct connection from there to a network or storage device that is not visible to Windows, so the auditing is completely isolated. Process Explorer could then run on another computer or VM and connect to the low-level capture, getting a view of the processes and hashes that can't be manipulated, while the attacker can't tell it is running against the system.
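The hashing side of that is the easy part; the isolation is the hard part. A minimal sketch of fingerprinting in-memory executable sections (the section contents here are stand-ins - real code would read them out of process memory via a driver):

```python
import hashlib

def section_hashes(sections):
    # sections: mapping of section name -> bytes actually mapped in memory,
    # so a fileless payload that never touched disk still gets fingerprinted
    return {name: hashlib.sha256(data).hexdigest()
            for name, data in sections.items()}

# these digests are what would be looked up against a service like VirusTotal
hashes = section_hashes({".text": b"abc"})
```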
Oct 12, 2013 at 1:03 AM
Can someone who's tried the Oculus Rift tell me how it works in the following sense:
If I have a PC monitor at arm's length, obviously my eyes are focusing at the distance of the monitor; if I watch the sky, they are focusing at "infinity".
One of the problems with these devices is that since the panel/LCD in the Rift is physically only about an inch from your eyes, unless there is something the two ophthalmologists I asked claimed to be impossible, your eyes are focusing just a couple of inches away (a bit like looking at one's nose). And I suspect that if there were some "magical optics" to solve this, they might need individual adjustment, possibly a dedicated adjustment for each eye - at least in my case - unless one is supposed to wear corrective lenses with them as well.
I've read hints suggesting that in some military planes and windshield displays they have somehow solved this, allowing one to focus "at the sky" while the CGI is projected onto a surface like the windshield or helmet visor - keeping focus at infinity while still being able to read content that is physically projected from close distance (but sits "at infinity" as far as focusing one's eyes goes).
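For what it's worth, this is standard collimating optics rather than magic: place the panel at (or just inside) the lens's focal length and the virtual image lands far away, so the eye focuses near infinity even though the panel is an inch out. A quick thin-lens check (the numbers are made up for illustration):

```python
def image_distance(f, d_obj):
    # thin-lens equation: 1/f = 1/d_obj + 1/d_img  =>  d_img = 1/(1/f - 1/d_obj)
    # a negative result means a virtual image on the same side as the object
    return 1.0 / (1.0 / f - 1.0 / d_obj)

# panel 39 mm from a 40 mm lens: virtual image roughly 1.56 m away
# as the panel distance approaches the focal length, the image heads toward infinity
```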
I've long wanted such technology: if one could swap between "infinite focus" and "near focus" while keeping the CGI text readable, it would solve some of the reading-related 'lazy focus' issues that come from extensively focusing the eyes at one fixed distance, leaving the eye's focusing mechanism under-practised.
If Kinect's big deal is end-to-end response time/latency, I think the above focusing issue is the classic big deal with wearable computer displays (not panel resolution, which some might believe is the issue - visible spacing between pixels is another real issue, but irrelevant unless the focus issue is also solved). Has Oculus Rift really solved that, or have I been right to ignore this hype?
From what I've read in articles about low-latency audio, since ACPI came along the advice pretty much always involves either swapping hardware or toggling things off in the BIOS and hoping the IRQ order changes (in case you need all the stuff you bought on a high-end motherboard).
The best approach IMO would be a simple-to-use user option somewhere that lets you select any one or two devices in the system and automatically re-arranges IRQ usage so that those devices get a dedicated, non-shared IRQ that also has higher priority than the others.
e.g. In my system I would manually give highest priority to audio, MIDI and custom hardware attached to any bus (e.g. data acquisition or some homebrew HW on a parallel port or GPIO pins), with network (e.g. USB WiFi) after that; the other devices can wait until those are done. When gaming, however, I might want to prioritize the GPU and input devices first and sound second (with a bigger buffer) - in TMNF, for example, network latency and sound are irrelevant; it's all about reaction time to what you see.
This could be limited to the Pro/Ultimate or similar SKU, to keep big HW vendors from going back to assuming they have a dedicated IRQ for whatever consumer hardware they put out, and MS could offer WHQL certification only for hardware that works with IRQ sharing in normal use cases, leaving the user the choice of which audio device on the system to prioritize highest.
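The selection policy being asked for is simple to state - a sketch of ranking devices by the user's ordered preference, with everything else behind them (device names are made up):

```python
def irq_priority_order(devices, preferred):
    # user-picked devices first, in their stated order;
    # the rest keep their relative order and simply wait
    chosen = [d for d in preferred if d in devices]
    rest = [d for d in devices if d not in chosen]
    return chosen + rest

devices = ["gpu", "usb-wifi", "audio", "midi", "sata"]
daw_profile = irq_priority_order(devices, ["audio", "midi"])   # low-latency audio work
game_profile = irq_priority_order(devices, ["gpu"])            # gaming: GPU first
```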