@Charles: I've been using Rx wherever I can, but haven't had as much opportunity as I'd like up to now. However, I firmly believe that's changing, both in my own work and in the development community in general, as the requirement and desire for asynchronous code becomes more commonplace. It will become more commonplace in large part because of things like the TPL, Rx and the new Async CTP, since asynchronous code is becoming something you can write in a reasonable amount of time, with far less code, once you understand the concepts.
Before, it was pretty much something you'd avoid if at all possible because of the complexity of the code, especially if you were trying to compose async computations ... that was a nightmare. Rx greatly simplifies the composition of asynchronous computations, so it makes you more willing to try in the first place. Once you start doing that, your programs become much more responsive, and you start to think about everywhere else you might be able to do something asynchronously, or event-based.
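To give a feel for what that composition looks like, here's a tiny toy model in TypeScript. To be clear, this is a hand-rolled sketch of the idea, not the real Rx API, and all the names in it (Stream, fromArray, and so on) are made up for illustration:

```typescript
// A toy push-based "stream": just a function that accepts a subscriber.
type Stream<T> = (next: (value: T) => void) => void;

// Source stream: emits each array element (a stand-in for mouse moves,
// timer ticks, web responses, etc.).
const fromArray = <T>(items: T[]): Stream<T> =>
  (next) => items.forEach((item) => next(item));

// Operators compose by wrapping the subscribe function.
const filter = <T>(src: Stream<T>, pred: (t: T) => boolean): Stream<T> =>
  (next) => src((t) => { if (pred(t)) next(t); });

const map = <T, U>(src: Stream<T>, f: (t: T) => U): Stream<U> =>
  (next) => src((t) => next(f(t)));

// The pipeline reads as a composition, not as nested callbacks.
const evensDoubled = map(
  filter(fromArray([1, 2, 3, 4, 5]), (n) => n % 2 === 0),
  (n) => n * 2
);

const results: number[] = [];
evensDoubled((n) => results.push(n)); // results is now [4, 8]
```

The real Rx operators do a lot more (subscriptions, errors, completion, schedulers), but the composition-by-wrapping shape is the same.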
I still have a lot to learn about it, and really need to write more code using it. I've used Rx for plenty of event handling, but that's the easy stuff. Beyond that, I've used it in "real life" code for asynchronously reading in CSV files and processing them. It worked well, but was difficult to get right. Jeffrey van Gogh later wrote some blog posts on that very problem, which I wish had existed beforehand, but I learned a lot from them. I've also used Rx for unit testing by simulating event streams. I'd like to do more of that in the future with the newer releases of Rx, because they have some features that would be very useful for it.
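For the unit-testing angle, the gist is that once your logic consumes a stream, you can feed it a recorded sequence of synthetic events instead of wiring up real event sources. A minimal sketch of that idea in TypeScript, using a plain array as the "recorded stream" (PriceTick and averagePrice are invented names, and this deliberately skips Rx itself):

```typescript
// An invented event type: a recorded price tick.
interface PriceTick { symbol: string; price: number; }

// The logic under test: average price for one symbol over a tick stream.
function averagePrice(ticks: PriceTick[], symbol: string): number {
  const matching = ticks.filter((t) => t.symbol === symbol);
  if (matching.length === 0) return 0;
  return matching.reduce((sum, t) => sum + t.price, 0) / matching.length;
}

// The "simulated event stream": pure data, no real event sources wired up.
const recorded: PriceTick[] = [
  { symbol: "MSFT", price: 10 },
  { symbol: "AAPL", price: 50 },
  { symbol: "MSFT", price: 20 },
];

const avg = averagePrice(recorded, "MSFT"); // (10 + 20) / 2 = 15
```

The newer Rx releases take this much further with test schedulers that also let you control virtual time, which is what makes timing-dependent logic testable.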
I hope I'll soon get to use it much more, and expect I will. I'm in the middle of transitioning to another dev team working on a significant web application, so hopefully I'll find opportunities to exploit Rx in a real, significant product soon. I imagine RxJs will apply a great deal.
When you start using Rx a lot, it changes the way you code quite dramatically. It's fun to puzzle things out. It really is mind-bending. There's the simple stuff, and there's the incredibly complex side where you are thinking about monads all the time, and you have to draw marble diagrams to figure out what the heck is happening. A lot of developers are scared of it. I love learning about this stuff, but it's not easy. A lot of developers I work with don't yet really understand LINQ, lambda expressions and so forth, let alone monads. Understanding LINQ is important to appreciating Rx, and a grasp of functional programming concepts definitely helps, as does trying to understand monads.
When I get into deep discussions about WHY Rx and so on, vs. just doing it the old, hard way, ultimately I have to bring up things like side-effects, closures, continuations, CPS and eventually monads, to explain why it provides true compositionality vs. the old way. In the end I find that is the hardest thing to really get into someone's head, because I end up having to use some mathematics to explain functional composition, and how LINQ and Rx help you achieve it. Everything else is something you can do the "old" way, but true compositionality is only achievable via monads (at least AFAIK).
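Since CPS and monads came up: a continuation is exactly the kind of thing that composes badly by hand and nicely with a monadic bind. Here's a small TypeScript sketch of that claim (Cont, unit and bind are my own toy definitions, not from any library; the two steps stand in for async work):

```typescript
// A continuation-returning computation: rather than returning a value,
// it hands its result to a callback ("what happens next").
type Cont<T> = (k: (value: T) => void) => void;

// unit wraps a plain value; bind is monadic composition: run c, feed its
// result to f, then run the computation f produced.
const unit = <T>(value: T): Cont<T> => (k) => k(value);
const bind = <T, U>(c: Cont<T>, f: (t: T) => Cont<U>): Cont<U> =>
  (k) => c((t) => f(t)(k));

// Two CPS-style steps (stand-ins for async work like I/O).
const parse = (s: string): Cont<number> => unit(parseInt(s, 10));
const double = (n: number): Cont<number> => unit(n * 2);

// The "old, hard way": manual nesting, one level deeper per step.
let nested = 0;
parse("21")((n) => double(n)((m) => { nested = m; }));

// With bind: a flat chain; this is the shape LINQ and Rx give you for free.
let chained = 0;
bind(bind(unit("21"), parse), double)((m) => { chained = m; });
// both nested and chained end up as 42
```

The point of the mathematics is that bind obeys laws that guarantee the flat chain means the same thing as the nested version, no matter how many steps you add.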
I didn't get to see any live coverage this year, unfortunately. However, I will say that the setup with the channels and everything looked awesome. Too bad I didn't get to experience it beyond just checking it out. Also, I'm in the middle of a move and don't have internet access yet, so I haven't been able to watch any of the pre-recorded stuff either. Soon.
Ehhh... like this post? It looks like they did ask you for a sample file...
A month and a half later ....
I agree with this, but it's interesting, if you think about it, how rarely it really helps to have the type specified instead of var. Consider:
String foo = "Hello";
// ... after another 50 lines of code ...
DoSomethingWithFoo(foo);
Here, the type information is pretty useless after the initial declaration. Without an IDE, you'd only have the option of using Find in Notepad or whatever, to search for foo's declaration, to see that it's a string. Now consider this:
var foo = new Foo();
foo.DoSomething();
How do you know DoSomething exists on Foo, even if you know foo is a Foo? My point is that, without an IDE or a search, you don't know anything about a type unless you already happen to know that type. If you know what Foo is, then great, it's nice to see that foo is actually a Foo; and if you named it foo, it's obvious in your own code what it is. But if you call it f, you won't know it's a Foo unless you search, or use an IDE to tell you. The only types for which type information in declarations really helps are the primitives, commonly used framework types, and types you've made yourself and remember.
See what I mean? We don't live in a world with just a few primitives and a small framework to learn and know; we live in a world of ever-increasing web services and types, and we will always need tools to look up the type information to really know what the heck they are. Why litter the code with that information? Let the IDE do the job. Once you look up the info via tooltips or docs, or Go To Definition, or the Code Definition Window, you're good to go.
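For what it's worth, the same trade-off exists in TypeScript, whose inference plays roughly the role of C#'s var; a quick sketch with an invented Foo:

```typescript
// An invented type, standing in for any class you don't know by heart.
class Foo {
  doSomething(): string { return "done"; }
}

// Inference does the work, just like C#'s var: the compiler knows the
// types, and the IDE shows them on hover; the code stays uncluttered.
const foo = new Foo();             // inferred as Foo
const result = foo.doSomething();  // inferred as string
```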
Just my 2 cents.
So var is unusable when you're using any kind of exception handling around initialisation? That seems like a massive drawback to its usefulness.
You're stuck between a rock and a hard place: you need to declare variables outside the exception-handled blocks (since the compiler cannot predict the route execution will take within that context), but a var must be initialised with a typed value at the point of declaration in order to be used.
How about this example, how would you refactor this into a var version of the same?

StreamReader sr = null;
string content = null;
try
{
    sr = new StreamReader("file.txt");
    content = sr.ReadToEnd();
}
catch (Exception ex)
{
    Console.WriteLine("Source: " + ex.Source +
        " Message: " + ex.Message);
}
You can use the default keyword:
var sr = default(StreamReader); // etc...
Astrophysics experiments are mainly observations ... they build expensive, giant telescopes, and send some of them into space, to observe things and look for signs of phenomena like black holes based on their theoretical impact on the surrounding light emitted from stars, and things like that. That's experiment.
Einstein is famous for "thought experiments", and those are experiments too. Then they take the theory based on the thought experiments and test it against observable phenomena, like gravitational lensing during a solar eclipse, or frame dragging measured with satellites. So, yeah, there is experiment there too; a theory HAS to be validated by experiment, otherwise it isn't science, it's just a guess.
And when Einstein and his buddies Podolsky and Rosen (IIRC) were hanging out one day talking about how much b.s. quantum mechanics was, they invented experiments to prove it wrong, saying, "hey QM proponents, what about this weird effect, where two particles would communicate faster than the speed of light if you set up an experiment like this ... that's just 'spooky', and shows there's no way QM is true". And then, in fact, experimentalists built the experiment and showed that entanglement is an actual, real physical phenomenon. Theorists like that have to invent experiments to test their ideas; the experimentalists build them.