My definition of a "systems" programming language pretty much goes like this, plus or minus a couple of nitpicky points:
A program, whether statically or dynamically compiled, is completely self-contained memory-wise when targeting a given execution platform, and runs on the target OS at ANY patch level that claims to maintain compatibility.
The resultant program must also play nicely in kernel space if required, meaning the host language provides the mechanisms and semantics to accommodate kernel-module/driver-style communication, enabling sane and safe kernel citizenship.
BTW Charles, I'm waiting for Niko Matsakis' Rust talk to be available, if you would be so kind :)
He seems to be having fun again instead of needing to be so serious, boring, and safe, as with such a military-grade C# implementation. That seems waaaay healthier for the old fella, and should put some spring back in his step.
What I'd like to see is something similar in nature to TS's type definition files, but more generic/JS-engine agnostic, so you could feed it straight to a JS engine. Near-perfect type information could then be handed to Chakra to specialize the compiled code, so it emits a near-perfect, high-perf compiled version that the actual code can use as a prepared, specialized runtime, ready to rock whenever the code actually runs. Sure, JS is dynamic, but helping the compiler do its job can't be bad; it's just being compiler-friendly, if it adds any value or benefit. Whenever it finally happens, you could almost enforce run-time security once JS engines support function freezing in the VM, since the templated types could carry freeze keywords where we intend them, helping the runtime code stay unmodified and used for its coded intention only.
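VM-level "function freezing" driven by type definitions doesn't exist today, but the closest thing already in the language is `Object.freeze` applied to a function object. A minimal sketch of that idea (the `add` function is just a made-up example, not anything from a real spec):

```typescript
// Sketch: approximating "function freezing" with Object.freeze.
// Freezing a function object blocks later monkey-patching of its
// properties, keeping the function close to its coded intention.
function add(a: number, b: number): number {
  return a + b;
}

Object.freeze(add);

try {
  // In strict mode (the default inside modules) this throws a
  // TypeError; in sloppy mode it silently fails. Either way the
  // function object is left unmodified.
  (add as any).cached = true;
} catch {
  // Mutation was rejected.
}

console.log(Object.isFrozen(add)); // true
console.log(add(2, 3));            // 5
```

This only protects the function object's own properties, not the surrounding scope or prototype chain, which is why engine-level support would still be the more interesting feature.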
Good to see this since I apparently missed it... and I can't wait to get some more juicy brain-food a-la-Charles.
I asked the question about offloading compilation to the GPU. Of course, the kind of data GPUs are good at barely fits today's compilers at all, but the point is that Windows NT was forward-looking, and GPUs are becoming more and more general purpose, even gaining access to shared memory. Thinking ahead, there may be a way to have the msbuild build system pre-process the code in a clever way so we can compile on our thousands of GPU cores. Right now they're tailored towards uniform numeric data, sure, but maybe with a tweak here or there we could finally use all, or even a fraction, of that compute power to compile on GPUs instead of doing the entire compilation process on our CPUs. I'd be a very happy and very impressed camper, since my GPU sits relatively idle while compiling anyway, and it's not like I have a huge compilation farm. I rarely play a game AND compile at the same time, since they're both very resource-hungry tasks and usually separate for me. So why not try to offload some computation to the GPU during the compilation process? We need Dave Cutler teaming up with AMD/NVIDIA on the problem.
I'm also glad they're planning to go parallel (better be without explicit threads, so the runtime can manage things). I missed this live session and was going to ask about any parallelism of the JS code inside the Chakra engine, since it SHOULD be smart enough, knowing the AST and data flow of the generated code, to intelligently create safe concurrency.
You can use "uname -a", "ls", "dir", "pwd", etc., and have a look in the /bin, /usr/bin, or /usr/sbin directories to see the programs you can run; there's even a little C compiler, so enjoy!
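For anyone following along, a quick session exploring an environment like that might look like this (the exact directories and whether a C compiler is present will vary by system):

```shell
uname -a                  # kernel name, version, and architecture
pwd                       # where am I?
ls /bin /usr/bin          # the commands available to you
ls /usr/sbin              # admin utilities, if present
command -v cc || command -v gcc || true   # is there a little C compiler?
```

`command -v` is the portable way to check whether a program is on the PATH; the trailing `|| true` just keeps the line from failing when no compiler is installed.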
Yea... the biggest and most stupid problem is that type info is used during development to help prevent bugs etc., and all that dev-time type info is lost in between, over the wire, then reconstructed afterwards in the engines... COMPLETELY redundant, since the type info is ~92% the same on both sides.
This may be a bad idea since it WOULD add more payload, but IDEs and browsers should work together on a type/function-signature/forward-declaration hinting script standard for browsers to use, carrying the same type info devs code with; that way at least the type info wouldn't be lost over the wire for no reason.
This all depends on whether the added download size negates the time spent parsing and optimizing without types versus with types provided to the engines. Basically: a function-signature definition include file that gives all the type info for the browser to use as the skeleton of all functions, helping the runtime avoid type checking & inference.
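No such hinting standard exists, but TypeScript's ambient declaration syntax gives a rough feel for what such a "signature include file" could look like. Everything below is hypothetical (the names `distance`, `Particle`, and `step` are invented for illustration), a declaration-only skeleton with no function bodies, which is exactly the shape of type info that would travel alongside the real script:

```typescript
// Hypothetical type-hint file, borrowing TypeScript's .d.ts syntax.
// Declarations only: the engine would match these signatures against
// the actual script and skip inference for the covered functions.

declare function distance(
  x1: number, y1: number,
  x2: number, y2: number
): number;

interface Particle {
  x: number;
  y: number;
  vx: number;
  vy: number;
}

// Mutates the particle array in place; returns nothing.
declare function step(particles: Particle[], dt: number): void;
```

Since these are pure declarations, they add payload but no executable code, which is the trade-off the download-size question above is really about.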
In my opinion types are being added as needed, and they're almost always needed; that sounds like the reason standards bodies began in the first place: to standardize common practice for interoperability. So yes, a need seems to exist here in some way or another. As long as it's optional for compatibility and provides a real boost somewhere, it might be interesting.
Thank god Charles for another video, I've been waiting for something new from you man!
IE is REALLY getting better, incomparable to the older generations of IE, and since C++ AMP/OpenCL is quite sexy in terms of perf, I think one way to make IE attractive enough to rip the competition to shreds is to provide a shim or interface to that kind of hardware utilization. IE11 is implementing WebGL anyway, so you may as well get WebCL/AMP in there too. Instead of playing catch-up, how about getting a leg up?
Ya know... he brings up an amazing friggin point with the concept of an "open-source IE". I actually believe MS SHOULD do this, for IE6 only, at this point in time. You guys (MS) are ending support for IE6 & XP soon enough, SO Microsoft should, I believe with all my heart and logical conclusions, open source IE6 and give it to the open source community to work on as a modern IE6 clone project, an alternative to the actual IE6 rendering engine and JS engine built into XP, which is going to EOL.
The whole IE infrastructure and code, I think, has already been replaced with new-generation code and concepts, so letting an open source community work on an IE6 clone built directly from the IE6 source, one that preserves IE6's terrorizing quirks while providing a built-in IE6-to-modern-web shim, wouldn't infringe on any current-generation IE9+ work. IE6's stagnant-web problem, which Microsoft created without malice, is indeed still massive, and everyone still on XP keeps that huge issue alive.
So, for administrators who must keep XP in-house, or for home users, an IE6 clone that shims IE6's architecture while preserving its "bug-as-a-feature" behaviors may actually help the web move forward, even a little. Doing so would also show that Microsoft genuinely wants to help dig the WWW out of the hole it accidentally created, demonstrating good intentions to the world by open sourcing IE6 and earning a bit of trust never before seen in Microsoft's history.