You mean apps in the mobile app store like "Google Maps" or "Gmail" or the "Google app"?
Car manufacturers (like many established industries) are drowning in ISO standards, regulatory standards, efficiency standards, fuel standards and all the rest. Software is the weird one, where few standards exist and the penalties for not complying with the few that do are basically nil.
If a software engineer actually said "you car engineers don't even know about standards" to a car engineer in real life, the car engineer would laugh, cry, and then bludgeon the software engineer to death with an ISO 691:2005 compliant wrench.
While it makes for a cute slide, this isn't so much a definition of "digital disruption" as a definition of a "franchise", which has existed since at least 1850.
For example, McDonald's is the world's largest chain of hamburger restaurants: it has 35,000 stores and is worth $108bn, yet has directly operated only 6 restaurants in the US (~5,000 worldwide) since about 1955 (based on the Quincy model from the 1920s).
Ford - the world's biggest automobile manufacturer for a time - started pushing all of its sales through franchises in 1914 under Henry Ford. The overwhelming majority of car dealerships in the US today aren't owned by the automobile company on the logo.
So while it's a cute slide, and it's fun for Californians to pretend they invented modern capitalism in the past couple of years in Silicon Valley all by themselves, it's just not true. If this is the definition of "digital disruption", then it started in 1850 with Isaac Singer and his sewing machines, a hundred and forty years before the web was invented.
@blowdart: Yup. I remember there were two secure email service providers: one was sued by DHS (or was it DOJ?) and went bankrupt, while the other (Lavabit) decided to shut down its business rather than bend to the request (a fine of USD 5,000 per day is not easy to withstand). And then there was a third one, Silent Circle, which wasn't sued by any US department but decided to shut down shortly after the Lavabit incident, fearing they'd be sued too some day.
There's a lot wrong there.
Lavabit closed itself down (it wasn't sued or bankrupted) after DOJ served them with a warrant to gain access to the emails of a suspected criminal, subject to an ordinary Fourth-Amendment compatible warrant signed by an ordinary Article III judge after showing probable cause as part of an ordinary criminal investigation.
The fine was imposed by the court for contempt of court and obstruction of justice because Lavabit refused to comply with the warrant.
*rofl* Yea. CISA passed and has pretty much the same effect, except instead of banning encryption it forces companies to hand over data to the DHS. Then there's the push for "backdoored encryption". The US is not a bastion of internet freedom in any way, shape or form.
If by "forces companies to hand over data to the DHS" you mean "companies can now voluntarily (i.e. can also choose not to) share threat-intelligence signatures (i.e. what Digital Crimes Unit at Microsoft is doing) without fear of being sent to jail for doing so".
Or in other words, the US Government made their own MAPP program, and US companies can sign up for it if they want to without implicitly violating the SCA.
For all of the breathless hysteria by many in the media, neither the UK nor the US government is in any danger of banning obviously unbreakable encryption like SSL/HTTPS. It's an absurdist strawman.
Oct 23, 2015 at 9:39AM
What's not to get? Windows Update now works like every other major product in existence; Windows is retiring from being the only major piece of software that actively maintained large numbers of old major versions of its products when newer versions of those products exist.
"Fast path" is like the "beta channel" of every other major product. "Slow path" is like the "stable channel" of every other product and the long-term support release is like the "long-term stable" release builds that some products like Firefox do. There's also a dev-branch and an alpha-branch, although those branches are only available to Microsoft employees.
I get that you probably don't like this change. But Windows was the outlier here. Microsoft is just making Windows' distribution look like the distribution of every other single piece of software that you use; i.e. you get to choose between beta-branch or stable-branch, and automatic updates mean you're never several major versions behind the tip of whichever tree you choose.
The short answer is that the indexer re-indexes files based on one of two criteria:
* Someone manually notified the Windows Search Service directly via a Notification
* The search indexer periodically re-indexes the file if the content has changed.
Because every product has a priority list of features that they want to implement, and this feature was lower than other features in Edge. Every feature implemented on any given product is an opportunity cost that means another feature wouldn't have been implemented.
In this case it seems justified. The feature relies on hardware-accelerated video decoding onto a sphere projection, which is something Chrome explicitly designed into their browser to support YouTube/360° videos like this one.
The quads that are projected to the sphere are available here: https://video-sin1-1.xx.fbcdn.net/hvideo-xfl1/v/t42.1790-2/11969984_870751996307656_847599406_n.mp4?efg=eyJ2ZW5jb2RlX3RhZyI6InFmXzc2OHdfY3JmXzIzX21haW5fNS4xX3AxMF9zZCJ9&oh=af464c442e2f0e17fdb781ab2e9c3fba&oe=56055F1E
In this particular case, the code required to provide an app-compat shim so that browsers without HW acceleration can do this (1) at all, (2) fast enough that the video remains watchable, and (3) embedded on Facebook means, I think, that the Star Wars team were justified in their decision to only support browsers with the mini-feature.
Sep 22, 2015 at 4:32AM
Let's say you have 2 people, each using a laptop with connections to two ISPs. Now you want to create a multi-homed, load-balanced peer-to-peer connection between those 2 people (assume they are using some arcane protocol that only supports IPv4 and no DNS at all).
For that there already exist some solutions, but they don't support load balancing: not just VPNs, but also things that support old network protocols for games, and then there are IPv6 tunnel providers like SixXS etc. Add some customized gateway to handle the load balancing and that should do it.
However, I need something that doesn't require installing anything on the server, while providing load balancing for the client such that the server doesn't lose state if one of my ISPs drops its connection.
Now, if the server supports IPv6 connections you could maybe get load balancing through SixXS, but in their current configuration I don't think that's supported.
There's an elegant solution to this using just IPv4 and no DNS:
Since you have IPv4, you can send TCP and UDP packets around the Internet (you can send other IPv4 packet types, but they get throttled on the backbone). So what you really want is a protocol, built over IPv4, for resolving a computer's address. Obviously there might be more than one person using the protocol, so let's assign each person an identifier (an "address", if you will), and the protocol lets anyone who knows that "address" reach one of your load-balanced servers.
This is a really simple protocol, so although we could use TCP, let's use UDP for argument's sake.
The only real requirement here is that "Computer A" needs to find "Computer B" across the cloud, so we'll need a fixed server that can relay the location information - say "Computer N", with some globally fixed IPv4 address. We can have a bunch of fail-overs to avoid introducing a global single point of failure into the network.
Now in our protocol, we can send a UDP packet from "Computer A" to "Computer N" asking for the "Address" of "Computer B". So far so good. But let's make it scale.
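That lookup step can be sketched in a few lines of Python with stdlib sockets. This is purely illustrative: the record table, the name "computer-b", and the use of localhost for the demo are all made up, and a real "Computer N" would obviously have a fixed public address.

```python
import socket
import threading

# Toy "Computer N": maps person identifiers to IPv4 addresses.
# (Made-up record; 203.0.113.0/24 is the RFC 5737 documentation range.)
RECORDS = {b"computer-b": b"203.0.113.7"}

def run_n_server(sock):
    """Answer one lookup: receive a name, reply with its address (or b'?')."""
    name, client = sock.recvfrom(512)
    sock.sendto(RECORDS.get(name, b"?"), client)

# Bind "Computer N" on localhost for the demo; the OS picks a free port.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
n_addr = server.getsockname()
threading.Thread(target=run_n_server, args=(server,), daemon=True).start()

# "Computer A" asks N for the address of "Computer B".
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"computer-b", n_addr)
address, _ = client.recvfrom(512)
print(address.decode())  # → 203.0.113.7
```

One UDP datagram out, one back - no connection setup, which is why UDP suits a protocol this simple.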
To do this, let's have a mesh of "Computer N"s, each in the cloud and geographically distributed. They'll use some protocol for ensuring they keep their records in sync, and we'll dynamically choose which "Computer N" to reach out to based on the local address of "Computer A". We'll do this by having a secondary bootstrap protocol when your computer boots so that your ISP or router tells you where the nearest "Computer N" is.
If we're managing a bunch of "Computer B"s - let's say some in Europe and some in North America for argument's sake - we can geo-balance the connections too. The "Computer B"s in Europe advertise as the resolving end-point for the unique identifier on the "Computer N"s in Europe, and the "Computer B"s in North America advertise on the "Computer N"s in North America.
Although you asked mainly about IPv4, we could even extend the protocol (bearing in mind that UDP/IPv6 looks quite different to UDP/IPv4). To do this, we say that when Computer A requests the address of Computer B from Computer N, Computer N returns a bunch of addresses, and some of those addresses can be IPv6 addresses. If the client on Computer A can handle IPv6, it can resolve the unique address to an IPv6 address; otherwise it will resolve to an IPv4 address.
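The client side of that multi-address answer is a one-function sketch (the address list is made up; the "do we have IPv6?" check here is just Python's `socket.has_ipv6` capability flag, a stand-in for whatever the real client would probe):

```python
import socket
import ipaddress

def pick_address(answers, prefer_ipv6=socket.has_ipv6):
    """Given the addresses Computer N returned, prefer an IPv6 address
    when the local stack supports it, otherwise fall back to IPv4."""
    v6 = [a for a in answers if ipaddress.ip_address(a).version == 6]
    v4 = [a for a in answers if ipaddress.ip_address(a).version == 4]
    if prefer_ipv6 and v6:
        return v6[0]
    return v4[0]

# Example answer from Computer N: one IPv6 and one IPv4 address.
answers = ["2001:db8::7", "203.0.113.7"]
print(pick_address(answers, prefer_ipv6=False))  # → 203.0.113.7
print(pick_address(answers, prefer_ipv6=True))   # → 2001:db8::7
```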
This gets us almost all of the way there. We've got ISP independence, so if one ISP goes down you don't care. We've got load balancing (hell, even geo-balancing) of servers, and the client doesn't need to know the address of the server. We don't even need to know the address of the N-servers, so long as we build this "initial bootstrap protocol" and get ISPs to use it. The bootstrap protocol could even be used to trivially implement gateways and proxies for users behind corporate networks.
The only major question left is which port to use for the UDP packets, bearing in mind we need to be really careful because we need to avoid firewalls blocking our new protocol and killing it. I did a couple of tests, and it turns out firewalls seem to let UDP port 53 through, and since we're not using it for DNS, it's a good choice of port for tunneling "name -> address" information through. To avoid confusing routers, we can even "smuggle" the data inside real DNS packets so that the data doesn't get dropped by deep-packet-inspection routers or firewalls.
To recap: we need to send UDP over port 53 to some server whose IPv4 address we got from some bootstrap protocol, over which we can resolve a globally unique "name" into one or more IPv4 and IPv6 addresses that we can use to connect to the servers in order to avoid a single-point-of-failure. If we have geo-disparate servers, we can get geo-local results from the "name server" by publishing local servers first on the "name server", and getting clients to resolve requests via their nearest "name server" (and hence resolve to the closest server address).
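For the "smuggle it inside real DNS packets" trick, the wire format is simple enough to build by hand. Here's a sketch of a standard A-record query per RFC 1035 - the name "example.com" and the transaction ID are arbitrary examples:

```python
import struct

def build_query(name, qtype=1, qclass=1):
    """Build a DNS query packet: a 12-byte header, then the name encoded
    as length-prefixed labels, then QTYPE/QCLASS (1/1 = A record, IN)."""
    header = struct.pack(">HHHHHH",
                         0x1234,   # transaction ID (arbitrary)
                         0x0100,   # flags: standard query, recursion desired
                         1,        # QDCOUNT: one question
                         0, 0, 0)  # no answer/authority/additional records
    question = b"".join(
        bytes([len(label)]) + label.encode() for label in name.split(".")
    ) + b"\x00"  # zero-length label terminates the name
    return header + question + struct.pack(">HH", qtype, qclass)

packet = build_query("example.com")
# 12-byte header + 13-byte encoded name + 4-byte QTYPE/QCLASS = 29 bytes.
print(len(packet))  # → 29
```

Sending this datagram over UDP to port 53 of any resolver gets a real answer back, which is exactly why firewalls wave it through.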
Huzzah! We solved the problem without even using DNS! \o/