@ray: you are correct that time-travel debugging has a long history of prior work; in a paper we published last year we review work on the topic going as far back as 1988. A major challenge with previously described systems is their large performance impact, frequently in the range of a 2x-10x slowdown, which makes them impractical to use on a day-to-day basis. The major advance we made is lowering the overhead of running a program in time-travel mode to the point where it is negligible from a developer's perspective. As described in the video, and in the paper I linked, we fundamentally reduce the overhead by working at a higher level in the system (the JS/HTML application rather than the entire C++/Browser/OS stack) and by leveraging existing features of the managed JS/HTML runtime (type safety and GC).
Your point about providing an API for the enabling technologies is an excellent one. As James mentions in the video we are excited about the possibilities of this new interrogative layer of virtualization and, hopefully, we will have more to say about this soon.
@md70: Hi, and thanks for the questions. We are working with some great people on the product teams to figure out how to turn the experimental system into something we can make public but, at this point, we don't have a concrete timeline for when that could happen.
On the technical question, you are right to observe that we don't want to re-execute some calls that have side effects; re-sending an XHR request during replay, for example, may be unexpected at the server. To handle this we do two things. First, during the original execution we record the results of these side-effecting operations: in the case of XHR, the values of responseText, responseType, etc. Then, when we replay the program and reach the send call, we transform it into a nop (i.e., we don't actually send the request). Later, when the program reads, say, the responseText property, we return the value that was recorded during the original execution.
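To make the idea concrete, here is a minimal sketch of that record/replay pattern in plain JavaScript. This is purely illustrative; the names (makeRecorded, the log array) are my own and not the API of our system, and a real implementation intercepts the operation inside the runtime rather than via a wrapper function.

```javascript
// Wrap a side-effecting operation so it can be recorded and later replayed.
// mode is 'record' or 'replay'; log holds the results captured during recording.
function makeRecorded(mode, log, sideEffect) {
  let cursor = 0;
  return function (...args) {
    if (mode === 'record') {
      const result = sideEffect(...args); // perform the real side effect
      log.push(result);                   // remember its observable result
      return result;
    }
    // Replay: the side effect becomes a nop; return the recorded value instead.
    return log[cursor++];
  };
}

// Stand-in for an operation like XHR send that we must not re-run on replay.
let requests = 0;
const fakeSend = () => ({ responseText: `reply #${++requests}` });

// Original execution: the request really happens and its result is logged.
const log = [];
const recordedSend = makeRecorded('record', log, fakeSend);
recordedSend();

// Replay execution: no new request is issued; the logged value comes back.
const replaySend = makeRecorded('replay', log, fakeSend);
const r = replaySend();
// r.responseText is 'reply #1', and requests is still 1: nothing was re-sent.
```

The cursor-based log works because replay is deterministic: the program performs the same side-effecting calls in the same order, so recorded results can simply be consumed in sequence.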