The short answer is that the indexer re-indexes files based on one of two criteria:
* Someone manually notifies the Windows Search service directly via a notification.
* The search indexer periodically re-indexes the file because its content has changed.
Because every product has a prioritized list of features its team wants to implement, and this feature ranked lower than others in Edge. Every feature implemented in any given product carries an opportunity cost: it means some other feature doesn't get implemented.
In this case it seems justified. The feature relies on hardware-accelerated video decoding onto a sphere projection, which is something Chrome explicitly designed into their browser to support YouTube 360° videos like this one.
The quads that are projected to the sphere are available here: https://video-sin1-1.xx.fbcdn.net/hvideo-xfl1/v/t42.1790-2/11969984_870751996307656_847599406_n.mp4?efg=eyJ2ZW5jb2RlX3RhZyI6InFmXzc2OHdfY3JmXzIzX21haW5fNS4xX3AxMF9zZCJ9&oh=af464c442e2f0e17fdb781ab2e9c3fba&oe=56055F1E
In this particular case, the code required to provide an app-compat shim so that browsers without HW acceleration can do this (1) at all, (2) fast enough that the video remains watchable, and (3) embedded on Facebook means, I think, the Star Wars team were justified in their decision to support only browsers that implement the mini-feature.
Sep 22, 2015 at 4:32AM
Let's say you have two people, each using a laptop with connections to two ISPs. Now you want to create a multi-homed, load-balanced peer-to-peer connection between those two people. (Assume they are using some arcane protocol that only supports IPv4 and no DNS at all.)
Some solutions for that already exist, but they don't support the load balancing: not just VPNs, but also tools that support old network protocols for games, and then there are IPv6 tunnel providers like SixXS etc. Add some customized gateway to handle the load balancing and that should do it.
However, I need something that doesn't require installing anything on the server, while still load-balancing on the client side, so that the server doesn't lose state if one of my ISPs drops its connection.
Now, if the server supports IPv6 connections, you could perhaps get load balancing through SixXS, but in their current configuration I don't think that's supported.
There's an elegant solution to this using just IPv4 and no DNS:
Since you have IPv4, you can send TCP and UDP packets around the Internet (you can send other IPv4 packet types, but they get throttled on the backbone). So what you really want is a protocol that tunnels some way of resolving a computer's address to you. Obviously more than one person might be using the protocol, so let's assign each person an identifier (an "address", if you will), and build a protocol over IPv4 that allows anyone who knows that "address" to reach one of your load-balanced servers.
This is a really simple protocol, so although we could use TCP, let's use UDP for argument's sake.
The only real way to do this is for "Computer A" to find "Computer B" across the cloud, so we'll need a fixed server that can relay the location information, say "Computer N", with some globally fixed IPv4 address. We can have a bunch of fail-overs to avoid introducing a global single point of failure into the network.
Now in our protocol, we can send a UDP packet from "Computer A" to "Computer N" asking for the "Address" of "Computer B". So far so good. But let's make it scale.
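That lookup step can be sketched in a few lines, assuming a toy text protocol; the `LOOKUP`/`ANSWER` verbs, the registry contents, and the address `192.0.2.7` are all invented for illustration:

```python
import socket
import threading

# Toy registry held by "Computer N": unique identifier -> IPv4 address.
REGISTRY = {"computer-b": "192.0.2.7"}

def run_computer_n(sock):
    """Answer one request: 'LOOKUP <name>' -> 'ANSWER <address>'."""
    data, client = sock.recvfrom(512)
    verb, name = data.decode().split(" ", 1)
    sock.sendto(f"ANSWER {REGISTRY.get(name, '0.0.0.0')}".encode(), client)

# "Computer N" listens on a UDP socket; localhost stands in for its
# globally fixed IPv4 address in this demo.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
threading.Thread(target=run_computer_n, args=(server,), daemon=True).start()

# "Computer A" asks "Computer N" for the address of "Computer B".
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(2)
client.sendto(b"LOOKUP computer-b", server.getsockname())
reply, _ = client.recvfrom(512)
print(reply.decode())  # ANSWER 192.0.2.7
```

One round trip, one datagram each way; nothing about the mechanism requires anything fancier.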
To do this, let's have a mesh of "Computer N"s, each in the cloud and geographically distributed. They'll use some protocol for ensuring they keep their records in sync, and we'll dynamically choose which "Computer N" to reach out to based on the local address of "Computer A". We'll do this by having a secondary bootstrap protocol when your computer boots so that your ISP or router tells you where the nearest "Computer N" is.
If we're managing a bunch of "Computer B"s, let's say some in Europe and some in North America for argument's sake, we can geo-balance the connections too. The "Computer B"s in Europe advertise as the resolving endpoint for the unique identifier on the "Computer N"s in Europe, and the "Computer B"s in North America advertise on the "Computer N"s in North America.
Although you asked mainly about IPv4, we could even extend the protocol (bearing in mind that UDP/IPv6 looks quite different from UDP/IPv4). To do this, we say that when Computer A requests the address of Computer B from Computer N, Computer N returns a bunch of addresses, some of which can be IPv6 addresses. If the client on Computer A can handle IPv6, it resolves the unique identifier to an IPv6 address; otherwise it falls back to an IPv4 address.
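The client-side selection logic for that extension is tiny; here's a sketch where the `("AAAA"/"A", address)` record format and the documentation addresses are invented for illustration:

```python
# Toy answer from "Computer N": each record is (family, address).
records = [
    ("AAAA", "2001:db8::7"),  # IPv6 address for "Computer B"
    ("A", "192.0.2.7"),       # IPv4 fallback
]

def pick_address(records, ipv6_capable):
    """Prefer an IPv6 record when the client supports it, else use IPv4."""
    preferred = "AAAA" if ipv6_capable else "A"
    for family, address in records:
        if family == preferred:
            return address
    # No record of the preferred family: take whatever came first.
    return records[0][1]

print(pick_address(records, ipv6_capable=True))   # 2001:db8::7
print(pick_address(records, ipv6_capable=False))  # 192.0.2.7
```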
This gets us almost all of the way there. We've got ISP independence, so if one ISP goes down you don't care. We've got load balancing (hell, even geo-balancing) of servers, and the client doesn't need to know the address of the server. We don't even need to know the address of the N-servers, so long as we build this "initial bootstrap protocol" and get ISPs to use it. The bootstrap protocol could even be used to trivially implement gateways and proxies for users behind corporate networks.
The only major question left is which port to use for the UDP packets, bearing in mind we need to be really careful because we need to avoid firewalls blocking our new protocol and killing it. I did a couple of tests, and it turns out firewalls seem to let UDP port 53 through, and since we're not using it for DNS, it's a good choice of port for tunneling "name -> address" information through. To avoid confusing routers, we can even "smuggle" the data inside real DNS packets so that the data doesn't get dropped by deep-packet-inspection routers or firewalls.
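"Smuggling" the lookup inside a real DNS packet just means emitting the standard wire format; here's a minimal sketch of packing a name into a syntactically valid query and reading it back out (the name `computer-b.example` and query id are arbitrary):

```python
import struct

def build_dns_query(name, query_id=0x1234):
    """Pack a name into a valid DNS query (QTYPE=A, QCLASS=IN)."""
    # Header: id, flags (recursion desired), 1 question, 0 other records.
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # Question name: each dot-separated label is length-prefixed,
    # and the whole name is terminated by a zero byte.
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split("."))
    return header + qname + b"\x00" + struct.pack(">HH", 1, 1)

def parse_qname(packet):
    """Recover the queried name from the question section."""
    pos, labels = 12, []  # question starts after the 12-byte header
    while packet[pos] != 0:
        length = packet[pos]
        labels.append(packet[pos + 1:pos + 1 + length].decode())
        pos += 1 + length
    return ".".join(labels)

packet = build_dns_query("computer-b.example")
print(parse_qname(packet))  # computer-b.example
```

Any middlebox doing deep packet inspection on UDP/53 sees a perfectly ordinary query and waves it through.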
To recap: we need to send UDP over port 53 to some server whose IPv4 address we got from some bootstrap protocol, over which we can resolve a globally unique "name" into one or more IPv4 and IPv6 addresses that we can use to connect to the servers in order to avoid a single-point-of-failure. If we have geo-disparate servers, we can get geo-local results from the "name server" by publishing local servers first on the "name server", and getting clients to resolve requests via their nearest "name server" (and hence resolve to the closest server address).
Huzzah! We solved the problem without even using DNS! \o/
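As a sanity check of the finished design, note that the entire client side of the protocol sketched above collapses to a single call to the resolver that, er, already ships with every operating system (using `localhost` here purely as a demo name):

```python
import socket

# Resolve a name to every address the "name servers" publish for it,
# IPv4 and IPv6 alike: the whole protocol described above, in one call.
addresses = {info[4][0] for info in socket.getaddrinfo("localhost", 53)}
print(addresses)  # e.g. {'127.0.0.1', '::1'}
```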
Sep 21, 2015 at 6:19PM
@cheong: That's actually a lot like the VPN-based solution suggested. Have redundant exit nodes in a cloud, and have a multi-homed, load-balancing VPN client connect to the exit through two or more ISPs. Then automatically switch exit nodes in the cloud based on the target being reached, to avoid latency. The cloud VPN exit would have the same IP assigned to all the exit nodes, somehow; though perhaps that isn't really necessary, as long as traffic to a given target always goes out through the same IP even as you move around. The cloud becomes a kind of abstracting router infrastructure on top of the Internet, keeping the IP that apps and targets see the same regardless of which ISP is used.
You're thinking at the wrong layer. All of the above is done well and easily by DNS, and is how big websites work. IP is for routing data across the Internet. By definition it's heavily tied to the route the packets need to travel across the Internet.
You can't have an ISP-agnostic IP address unless you're an ISP in your own right. You can have an ISP-agnostic DNS name that load-balances across multiple IP addresses distributed across multiple ISPs.
What you're basically suggesting is re-inventing DNS, but slower and without memorable names. Use DNS. It's already solved the problem you're trying to solve.
I suggest you go home and install Windows 7 before you have an aneurysm. It has all the features you want: no Metro, you can turn Windows Update off, and the software you write on your Windows 7 machine will also work out of the box on Windows 8 and Windows 10 too.
With regards to the secdrv.sys stuff, I'm inclined to agree. Although the DRM driver is sketchy as hell, the bulletin that disabled it was entirely unrelated to secdrv - it was a bunch of bugs in Win32k's graphics engine.
Microsoft would have been fully within their rights, of course, to remove secdrv.sys from the kernel and add appcompat shims to the various games that it breaks. But disabling feature-applications based on "defense in depth" claims about drivers that haven't had a publicly reported vulnerability since 2007? That's a bit of a stretch.
And even if it DID come into the system, IE can be set not to run ActiveX, to restrict what scripts can do, and generally to block stuff you really don't want running just because some ad server delivered it to your browser. Even with these settings you still see the content of most sites, which is why you visited them in the first place.
But IE can't be configured to block access to the ~2000 or so Win32k syscalls that have traditionally comprised over 90% of the attacks against the operating system.
It's rather less easy to disable the Win32k.sys syscalls in IE so that your browser is immune to unknown (and hence unpatched) kernel bugs, like the font bugs and GDI bugs in Windows.
Exploit kits exploit kernel bugs to escape the browser sandbox and install malware. Chrome is immune to the bugs in Win32k (>99% of historical kernel bugs) before they even come out; Internet Explorer is not. That is a major difference in security between the two.
Good luck building a comprehensive list of all the bad sites on the Internet in your hosts file and keeping it up to date. But I'd rather have a browser that's less exposed to full-system-compromise bugs in the first place.