Yeah, the presenter should have switched the feed over so we could see what he was talking about. The other sessions did this.
Coffeehouse | What is happening with XNA? | 17 | Jan 12, 2013 at 9:36PM
I wish the microphones for the questions were wired up for the video recording. It was impossible to make out any of the questions on the playback.
@ClemensVasters: Right, but my point is you have to have a strategy that deals with change.
The best way to think about it is that the schema is a DSL which can be used to automate code functions like validation, serialization, GUI display, reports, etc. That DSL needs to be able to cope with minor and major changes, and be expressive enough to deal with complex conditional and optional structures.
This approach shouldn't be applied everywhere (for example, when the dataset is trivial in size). And if the DSL isn't rich enough, or the approach has failed before, the solution is to fix the DSL rather than just resort to a non-machine-readable schema document. If you go typeless, you've removed the ability to automate a large part of your system.
I don't really think you can generalize on this. For simple interactions like the examples given, a schema and a shared type are overkill. But in a large, complex transactional system with hundreds or thousands of fields, it simply doesn't make sense to avoid schema.
If there's no machine-readable schema then each developer has to write their own serializer and deserializer code. A process reading a complex message of 400 or more fields in a structured document would have to verify the presence of each field before using it. This leads to monotonous, buggy code.
It's far easier to automate the task of verifying the message against one of the valid schemas. Providing basic rules for non-breaking minor changes is a must. The validation code can be generated - and once you have a valid message you can deserialize it so that you can programmatically access the contents.
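A minimal sketch of what I mean, in Python: a declarative schema (the "DSL") from which the validator is derived, so nobody hand-writes per-field presence checks. The schema format and field names here are invented for illustration.

```python
# Sketch: derive a validator from a declarative schema instead of
# hand-writing per-field checks. Schema format and field names are
# made up for illustration.

SCHEMA = {
    "order_id": {"type": int,   "required": True},
    "customer": {"type": str,   "required": True},
    "discount": {"type": float, "required": False},  # optional: a non-breaking minor addition
}

def make_validator(schema):
    """Generate a validation function from the schema."""
    def validate(message: dict) -> list:
        errors = []
        for name, rule in schema.items():
            if name not in message:
                if rule["required"]:
                    errors.append(f"missing required field: {name}")
            elif not isinstance(message[name], rule["type"]):
                errors.append(f"bad type for {name}")
        return errors
    return validate

validate = make_validator(SCHEMA)

assert validate({"order_id": 42, "customer": "acme"}) == []   # valid; optional field absent
assert validate({"customer": "acme"}) != []                   # invalid; required field missing
```

Adding a new optional field is then one line in the schema, and every consumer's generated validator picks it up without breaking.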
Resorting to manual message validation and serialization as a way of solving the issue of tight coupling is really treating the symptom rather than the cause. The real fix is a versioning strategy with a flexible, automated code generation system that is designed for loosely coupled providers and consumers.
A good practical example of this is Google Protocol Buffers' extension mechanism. With protobufs you can partially deserialize into a code-generated class and load an extension schema dynamically at runtime to access additional data. Much like C#'s hybrid of static and dynamic typing.
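To illustrate the idea (this is not the real protobuf API, just a toy model of it in Python): the decoder maps known field numbers to names, carries unrecognised fields along as raw data, and an extension schema registered later can read them.

```python
# Toy model of protobuf-style extensions (NOT the real protobuf API):
# known field numbers decode to names, unknown ones are preserved,
# and an extension schema loaded at runtime can decode the leftovers.

BASE_SCHEMA = {1: "order_id", 2: "customer"}

def decode(wire: dict, schema: dict):
    """Split a raw field-number -> value map into known and unknown parts."""
    known = {schema[n]: v for n, v in wire.items() if n in schema}
    unknown = {n: v for n, v in wire.items() if n not in schema}
    return known, unknown

wire_message = {1: 42, 2: "acme", 100: "gift-wrap"}   # 100 is an extension field

known, unknown = decode(wire_message, BASE_SCHEMA)
# Old consumers work from `known` and pass `unknown` along untouched.

EXTENSION_SCHEMA = {100: "packaging"}                  # loaded dynamically later
extra, _ = decode(unknown, EXTENSION_SCHEMA)
assert extra == {"packaging": "gift-wrap"}
```

The point is that the base consumer never breaks when extension fields appear - it just doesn't see them until it opts in.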
As for static vs dynamic typing - surely a language that allows for both is best.
This is great.
I'm a heavy CoffeeScript user and would definitely use this. Couple of suggestions:
- Line number correlation is important for debugging. I don't remember what happened to the standards proposal, but some browsers were going to support this in JS.
- Is there an option to minify? Now that more type information is known, I would expect a minifier could do a better job.
- What about async support? I'd like to see something like Iced CoffeeScript.
- Is there an option to generate interface only .ts files for distributable libraries (essentially just the metadata)?
Also, I would love to see a hosted dev environment for this.
This looks great. But the problem is that it doesn't quite give you what the Web Essentials extension did before it. The CSS page inspector looks great, but changes aren't reflected automatically - you need to refresh. This is fine for most sites, but it's a pain for Single Page Applications: when you're on a dialog whose state isn't reflected in the URL, you'd need to fire up the dialog again after every refresh. Why not support dynamic CSS changes?
Other than that - the new features look great.