Thanks Mike. It sounds like registering contracts with the framework would be the core enabling technology for proper blame assignment and proactive failure prevention. It almost reads as though you guys are planning to start working on that.
I wonder whether it is, or someday would be, possible to interrogate a method about its contracts at run time, so the caller could ensure compliance before actually invoking the method. E.g., before sending a big batch of data over the wire for pre-processing and loading into a database in one transaction, I could get an abstract code tree from the transformation service that represents all, or at least some, of the checks, run them locally, and take corrective action proactively.
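What that could look like, as a very rough sketch (Python for illustration; `BatchService` and its `preconditions` list are entirely made up, not an actual framework API): the service publishes its checks as data, and the caller evaluates them locally before committing to the expensive call.

```python
# Hypothetical sketch: a service exposes its preconditions as inspectable
# predicates, so a caller can validate a batch locally before sending it.

class BatchService:
    # Each precondition is a (description, predicate-over-one-record) pair.
    preconditions = [
        ("id must be positive", lambda rec: rec["id"] > 0),
        ("name must be non-empty", lambda rec: bool(rec["name"])),
    ]

    def load(self, batch):
        # The service still enforces its own checks server-side.
        for desc, check in self.preconditions:
            assert all(check(r) for r in batch), desc
        return len(batch)

def violations(service, batch):
    """Run the service's published checks locally, before the wire call."""
    return [desc
            for desc, check in service.preconditions
            for rec in batch
            if not check(rec)]

batch = [{"id": 1, "name": "a"}, {"id": -5, "name": ""}]
print(violations(BatchService, batch))
# → ['id must be positive', 'name must be non-empty']
```

The caller could then fix or drop the offending records proactively instead of losing the whole transaction mid-flight.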
Units of measure in F# are an exciting feature. No doubt about it. I was really impressed when I learned they were being added to the language.
It surely addresses a lot of potential issues with measurement mismatch.
Yet run-time support for measurements still makes a lot of sense.
If you are reading data from external sources at run time (files, sensors, or web services), you'd still have to implement all the measurement tracking and conversion yourself. If this could be married to contracts somehow, then the application would just have to tell the framework that it expects mass to be in kilos, and if the input feed turns out to be in pounds, the conversion would happen behind the scenes.
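What that might look like at run time, as a toy sketch (Python for illustration; the conversion registry and the `expect` function are made up, not a real framework): values from external feeds carry a unit tag, and conversion into the unit the application declared happens automatically.

```python
# Made-up sketch of run-time unit tracking: values read from external
# feeds carry a unit tag, and conversion to the expected unit is automatic.

CONVERSIONS = {
    ("lb", "kg"): 0.45359237,          # exact definition of the pound
    ("kg", "lb"): 1.0 / 0.45359237,
    ("kg", "kg"): 1.0,
    ("lb", "lb"): 1.0,
}

def expect(value, unit, expected):
    """Convert a tagged (value, unit) pair into the unit the app expects."""
    try:
        return value * CONVERSIONS[(unit, expected)]
    except KeyError:
        raise ValueError(f"no conversion from {unit} to {expected}")

# The feed turned out to be in pounds; the application asked for kilos.
mass_kg = expect(10.0, "lb", "kg")
print(round(mass_kg, 3))   # → 4.536
```

The point is that the application only ever states the unit it expects; the mismatch handling lives in one place instead of being re-implemented at every input boundary.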
Another presumably useful feature would be declaring measures off of classes/types. If, for example, I'm counting my chickens, I don't want to be able to inadvertently add that to the count of eggs, unless I explicitly coerce chickens and eggs to be "things".
Anyhow, compile time support is a very good start.
Great, you have it all covered. I'm sold.
Now, this in-memory columnar database: is it going to ship along with Gemini only, or will it eventually make it into some edition of the SQL Server engine?
Sure, a database query language doesn't have to be SQL (or arguably even shouldn't be), but the query language aside, such a compelling feature as in-memory columnar storage looks very appealing as a generic service.
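For what it's worth, the appeal of the columnar layout is easy to illustrate with a toy sketch (plain Python, nothing to do with Gemini's actual storage engine): each column lives in its own contiguous array, so a scan or aggregate over one column never touches the others.

```python
# Toy illustration of row vs. column layout (not Gemini's implementation).
rows = [("apple", 3, 1.2), ("pear", 7, 0.8), ("plum", 2, 2.5)]

# Columnar: the same data pivoted into one array per column.
names, qty, price = zip(*rows)

# An aggregate over "qty" now scans one contiguous array instead of
# skipping through every row, which is what makes columnar scans fast
# (and what makes the columns compress so well).
print(sum(qty))   # → 12
```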
Columnar in-memory database storage, wow, that's some serious stuff!
A question, though: when a dozen users load their BI mini-apps onto the server and try to run them all, will there be a dozen copies of the in-memory database on the server (which might quite easily kill the server on the spot)? Or is the server version of the columnar database not in-memory?
Good introductory video. Not too many technical details, though.
Are the compute-node SQL Server instances running the same code as the coordinator? It doesn't sound like they need to be.
Is data auto-partitioning going to be supported?
How does Madison compare to what is now Oracle's Exadata?
What kind of storage (row-oriented or column-oriented) is used on the compute nodes?
The coordinator still seems like a potential bottleneck: if 150 compute nodes start streaming back to the coordinator on a poorly scoped query, there is still a good chance of flooding it with data. Are there any provisions for scaling out the coordinator, or is it vertical scaling only for now?
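One classic mitigation for exactly that bottleneck (my assumption about the general scatter-gather technique, not a statement about Madison's internals) is to push partial aggregation down to the compute nodes, so the coordinator merges a handful of partial results instead of raw rows:

```python
# Sketch of partial aggregation in a scatter-gather query: each node
# reduces its own rows first, so the coordinator's input is O(nodes),
# not O(rows).  (Illustrative only; not Madison's actual protocol.)

node_data = [
    [5, 3, 9],      # rows held by compute node 0
    [1, 1, 4],      # node 1
    [7, 2],         # node 2
]

# Each node streams back one small partial result...
partials = [(sum(rows), len(rows)) for rows in node_data]

# ...and the coordinator merges the partials to get SUM, COUNT, AVG.
total = sum(s for s, _ in partials)
count = sum(c for _, c in partials)
print(total, count, total / count)   # → 32 8 4.0
```

Of course this only helps for decomposable aggregates; a poorly scoped `SELECT *` still has to move the rows somewhere, which is presumably why query scoping matters so much in these architectures.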
Really looking forward to more videos on Madison (with a bit more detail on the internals).
Ah, the wonders of computing. Ideas are simple, yet 90+% of all the implementation work is about discovering and handling all the border cases.
By the way, what were those (two?) good books mentioned in passing on the subject of partial template specialization (a.k.a. template metaprogramming)?
Outstanding, it doesn't even look all that scary. Many thanks, Louis!
Thank you for the pointers, Louis.
Is there anything that can be used in cases where the binaries come not from the internal dev team or a major vendor like Microsoft, and there are no .pdbs immediately available? Is it possible to blindly search for a sequence of machine code instructions (naive signature matching)? Or is the /GS-injected code in that case "optimized out beyond recognition"?
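The naive-signature-matching idea itself is simple to sketch (toy Python; real tools additionally have to mask out relocated addresses, varying register allocation, and instruction reordering, which is where it gets hard):

```python
# Toy byte-signature scan: find a known instruction sequence in a binary
# image, with wildcard entries (None) for operand bytes the compiler
# is free to vary.

def find_signature(image, pattern):
    """Return offsets where `pattern` matches; None entries match any byte."""
    hits = []
    for i in range(len(image) - len(pattern) + 1):
        if all(p is None or image[i + j] == p
               for j, p in enumerate(pattern)):
            hits.append(i)
    return hits

# Two x86 prologues: push ebp / mov ebp,esp / sub esp,<imm8> / leave / ret.
image = bytes.fromhex("5589e583ec10c9c3" "5589e583ec20c9c3")

# Wildcard the stack-frame size, which differs per function.
sig = [0x55, 0x89, 0xE5, 0x83, 0xEC, None]
print(find_signature(image, sig))   # → [0, 8]
```

Whether /GS checks survive in a recognizable enough form for this to work on optimized release binaries is exactly the question, of course.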