@Dev Some of what you said I can agree with, and some absolutely not. You want a strongly typed language? Try Haskell. Oh, BTW, Haskell relies heavily on type inference (your hated var). Can you write bad code with it? Yes, but that's true of every API, operator, keyword and feature.
You obviously have a strong idea of what "good maintainable code" looks like. Doesn't it make sense to use tools to enforce that, and make it easier? You hate var. Wouldn't you want something to ensure your team doesn't use it then?
Pretty good "webinar". In the video it was mentioned (paraphrasing) all DI containers do reflection. This actually isn't true. What's true is that MOST containers do reflection, but there are some out there that use no reflection. In fact, it just takes a few lines of code to create a container that doesn't use reflection at all.
var container = new Container();
container.Register<ILogger>(c => new Logger());
container.Register<Foo>(c => new Foo(c.Resolve<ILogger>()));
var foo = container.Resolve<Foo>();
public class Container
{
    private readonly Dictionary<Type, Func<Container, object>> registry
        = new Dictionary<Type, Func<Container, object>>();

    public void Register<T>(Func<Container, T> factory) =>
        registry.Add(typeof(T), c => factory(c));

    public T Resolve<T>() => (T)registry[typeof(T)].Invoke(this);
}

public interface ILogger
{
    void Log(string message);
}

public class Logger : ILogger
{
    public void Log(string message) => message.Dump();
}

public class Foo
{
    private readonly ILogger logger;

    public Foo(ILogger logger) => this.logger = logger;

    public void DoSomething() => logger.Log("Log something.");
}
I wrote that code in LINQPad, so there are some adjustments to make if you put it into a console application (basically, replace the call to "Dump" with Console.WriteLine), and I've used several new C# language features to condense the code as much as possible, but this should be very understandable to anyone. There's zero reflection in this code, and yet I'm using a container to resolve dependencies.
Ignoring the fact that this isn't production-quality code (no error handling, for instance), this naïve container is fully usable in any application you'd write. The drawbacks: registration is more complex, and there are some features missing, like the ability to tell the container to only ever create one instance of the ILogger no matter how many times you inject it, or to handle other lifetime concerns. There are containers available on NuGet that properly handle errors and add the missing features while continuing to use the same "no reflection" design this simple code does.
As for concern about reflection cost if a container does use it... a good container does reflection only during registration/container build time. Internally it's using the same mechanism this naïve code uses, associating a type with a factory delegate, so when you Resolve there's no reflection at all. This means there's very little impact to the performance of your code (the only cost is a minor cost at startup and some dictionary lookups when you resolve). So the speaker was absolutely correct... you'll be hard pressed to notice any performance differences when you use a DI container.
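To make that concrete, here's a hypothetical sketch (the names are mine, not any real container's API) of how a container can confine reflection to registration time: reflection finds the constructor once, an expression tree compiles a factory delegate, and Resolve is then just a dictionary lookup plus a delegate call.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

// Demo: resolve Foo through the container; no reflection runs in Resolve.
var container = new ReflectingContainer();
container.Register<ILogger, Logger>();
container.Register<Foo, Foo>();
container.Resolve<Foo>().DoSomething();   // prints "Log something."

public class ReflectingContainer
{
    private readonly Dictionary<Type, Func<object>> registry = new();

    public void Register<TService, TImpl>() where TImpl : TService
    {
        // Reflection happens here, once per registration: find the single
        // public constructor and compile a factory delegate for it.
        var ctor = typeof(TImpl).GetConstructors().Single();
        var resolveMethod = typeof(ReflectingContainer).GetMethod(nameof(ResolveByType));
        var args = ctor.GetParameters().Select(p =>
            (Expression)Expression.Convert(
                Expression.Call(Expression.Constant(this), resolveMethod,
                                Expression.Constant(p.ParameterType)),
                p.ParameterType));
        var factory = Expression.Lambda<Func<object>>(
            Expression.Convert(Expression.New(ctor, args), typeof(object))).Compile();
        registry.Add(typeof(TService), factory);
    }

    public object ResolveByType(Type type) => registry[type]();

    // No reflection here: a dictionary lookup and a delegate invocation.
    public T Resolve<T>() => (T)registry[typeof(T)]();
}

public interface ILogger { void Log(string message); }
public class Logger : ILogger
{
    public void Log(string message) => Console.WriteLine(message);
}
public class Foo
{
    private readonly ILogger logger;
    public Foo(ILogger logger) => this.logger = logger;
    public void DoSomething() => logger.Log("Log something.");
}
```

The startup cost is the reflection and the expression compile; every subsequent Resolve pays only for the lookup and the delegate call, which is the point the speaker was making.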
I wasn't suggesting you trust StackOverflow... I was being lazy. The answers there referenced the specification, which is what I should have hunted down and linked to. As is always appropriate, you should trust the spec before any other source, which includes P&P. Especially when that page is so vague. I could be wrong, but what I believe the page is truly pointing out is that determining the order in which initialization occurs isn't always possible, much less easy to do. So, if your initialization depends on other static members being initialized, yes there can be race conditions or even other factors that cause you problems here. That's not the case in most Singleton classes, however.
I'll grant you that avoiding the DCL is an opinion... but one that's founded on a great deal of knowledge of this space. That P&P page stating "The common language runtime resolves issues related to using Double-Check Locking that are common in other environments." is correct, but remarkably misleading. The common language runtime provides enough guarantees and features to enable you to write the pattern correctly, but it's still a terribly complicated topic that most people don't understand and can easily code incorrectly. You've done so correctly here, and given good comments about what's needed, but in the video you demonstrated that you don't actually understand it. Just my opinion, but when there are alternatives to such dangerous patterns we're better off using them instead, and we most certainly should be teaching them.
On IDisposable, I ask again, when would you call Dispose? The only logical time you could do this would be at application shutdown, which is pointless. The act of shutting the application down will already clean things up. There's no point in explicitly doing so. Your example with NHibernate seems incomplete. When did they call Dispose in that example? I'm willing to bet they didn't.
I'd love to hear where P&P suggests you set a variable to null in a Dispose method. It's a pointless thing to do. If there were a reason to do this then EVERY class should implement IDisposable and we'd be writing a lot of boilerplate code to null out our members and wrapping absolutely everything in using statements. If you follow the "IDisposable pattern" you only implement IDisposable if your type contains other IDisposable types or directly uses unmanaged resources. Your Singleton doesn't, and should NOT be IDisposable, and far more importantly should not have a finalizer. There's significant cost to adding a finalizer to a class, and given we have safe handles (and the pattern we can follow for other resources) I believe the "IDisposable pattern" is broken and we should never (unless implementing a safe handle) implement a finalizer. That bit is opinion, but based on technical reasoning. Don't let that tangent derail your thinking here though, as it doesn't really have any bearing on the fact that you have no cause to use IDisposable here.
This has been a great series so far, but we just went off the rails. Figures we'd do so with the "anti-pattern" that is the Singleton. :)
First, I want to clear up a common misconception. The Singleton design pattern is probably the worst named of all of the design patterns, because according to the GoF book the pattern constrains the number of instances created. Note that it does NOT say there's only one instance. For example, a class used to talk to a server could be designed as a Singleton. If we know there's exactly two such servers we can talk to then exposing a ServerA and a ServerB property means we're still following the Singleton pattern even though we create an instance of the class for both of those. I know, not the best example of this, but strictly speaking this is what Singleton does. It restricts the number of instances, usually but not exclusively to one.
In your implementation you use the DCL. I firmly believe you should never do this. The nuances for getting it right (in many languages you can't) are complicated. In the video several times you admit to not fully understanding it, and make some declarations about this that are not entirely accurate (such as stating volatile will cause it to wait until the object is fully constructed... this statement shows you understand the issue exists, but not what that issue is exactly, or how this fix works). So, since there's really no reason to use this construct, it's better not to. In this case, static construction/initialization would deal with this for you. Don't agree that it does? OK, use Lazy<T> instead then. Don't rely on a very low level construct you don't fully understand here, use a higher level construct you can reason about instead.
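For example, here's a minimal sketch of the Lazy&lt;T&gt; alternative I'm suggesting; the class name is mine, but the construct is standard .NET:

```csharp
using System;

// Both accesses return the same object.
Console.WriteLine(ReferenceEquals(Singleton.Instance, Singleton.Instance)); // prints "True"

public sealed class Singleton
{
    // Lazy<T>'s default mode (ExecutionAndPublication) guarantees the
    // factory runs at most once, even under concurrent first access,
    // so there's no hand-rolled locking or volatile field to get wrong.
    private static readonly Lazy<Singleton> instance =
        new Lazy<Singleton>(() => new Singleton());

    public static Singleton Instance => instance.Value;

    private Singleton() { }
}
```

All of the memory-model subtleties the DCL tries to handle are dealt with inside Lazy&lt;T&gt;, which is exactly the kind of higher-level construct you can reason about.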
You threw in IDisposable in this discussion for some reason, and really messed it up. Firstly, the whole point of a Singleton is that it will exist for the entire lifetime of the application, so there's exactly zero need to make it IDisposable. Who's going to call Dispose and when? The Main method before exit? What purpose does that serve? You probably made this mistake because it appears you don't understand the difference between IDisposable and finalizers. You seemed to indicate in the video that the GC will call Dispose. It won't. That's why your finalizer calls it. You did follow the "IDisposable pattern" correctly, but frankly that pattern is broken and wrong (explaining that statement would take too long here, but the reality is you probably should never code a finalizer yourself). However, your implementation for Dispose is pointless at best. Setting members to null in Dispose is in all likelihood a waste of time and effort. Theoretically this could allow an object to be collected sooner, but in reality this will hardly ever be true, and unless that object tracks unmanaged resources there's no benefit to it happening sooner anyway. No, Dispose is needed only if you have members that need to be disposed (or, ignoring my comment about the "IDisposable pattern" being broken, to release unmanaged resources). Bottom line, including IDisposable here was wrong, will confuse far too many people, and you didn't do it correctly in any event.
I try to stay out of them as well. I usually use pull, but unlike the site I linked, which takes the other side, I'm not going to try and convince you I'm right. :) Understand the difference (there's not much) and decide for yourself which you prefer, because it is just a preference (unless you're in one of the edge-case scenarios where you really do want to compare, or need to compare before deciding to merge).
To answer your question about the '/'... let's analyze both commands.
> git fetch origin master
This says to fetch the 'master' branch from the remote named 'origin'. What's not spelled out in the command is where it actually fetches to. It fetches the remote branch into a local branch that it names 'origin/master'. Maybe you're seeing the answer already. :)
> git merge origin/master
This says to merge the branch named 'origin/master' into the branch you're currently in. This 'origin/master' branch is a local branch, not a branch on a remote.
So, 'origin/master' is just a branch naming convention used to distinguish branches that have been fetched from remote repositories.
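If you want to see that convention in action, here's a throwaway demo (the temp directory and repo names are mine) that publishes commits from one clone and then does the fetch-then-merge dance in another:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q --bare upstream.git
# Pin the default branch to 'master' so the demo works on any git version.
git --git-dir=upstream.git symbolic-ref HEAD refs/heads/master

# A "writer" clone publishes the first commit.
git clone -q upstream.git writer
git -C writer symbolic-ref HEAD refs/heads/master
git -C writer -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "first"
git -C writer push -q origin master

# A "reader" clone is made while only "first" exists...
git clone -q upstream.git reader

# ...then the writer publishes a second commit.
git -C writer -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "second"
git -C writer push -q origin master

# fetch updates only the local tracking branch 'origin/master'...
git -C reader fetch -q origin master
# ...and merge brings it into the current branch. Together these are
# exactly what 'git pull origin master' would have done in one step.
git -C reader merge -q origin/master
git -C reader rev-list --count HEAD
```

After the fetch but before the merge is the only window where you can diff your local master against origin/master, which is the one real reason to split the steps.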
BTW, I didn't care for your explanation of when to use fetch/merge vs. pull. If there are conflicts, a pull still allows you to resolve them, so that's not really a reason to use fetch/merge instead. Frankly, the difference is mostly just a religious debate. There's a good number of folks who recommend fetch/merge, such as this one: https://longair.net/blog/2009/04/16/git-fetch-and-merge/. The reasoning is highly opinionated, however, and I find it funny that most of the folks who hold this opinion do a fetch and then immediately do a merge without doing anything in between. If that's what you're doing then you're just typing more, as that's exactly what a pull does. The only time fetching without merging is beneficial is when you want to be able to compare your local version with the remote version before, or even without, merging. That's really it. I actually prefer to use pull most of the time for this very reason.
@AnilApex: There's no such thing as a "single core" or "multi-core" program. Any multi-threaded program will use as many cores as it can. And there's no magic a compiler can do to turn a single threaded program into a multi-threaded program.
@MadsTorgersen What happened to the contract ideas? Are we going to go yet another release without DbC support in the language? Why? Even the C++ committee is looking into this. For others: yes, I'm aware of the Code Contracts framework from the research group, but there are many reasons why that just doesn't cut it. Chief among them is that no "third party" or "add-on" solution is going to work... we need contract usage to be pervasive, which means it must be part of the language. The IL-rewriting solution is also problematic... we need contracts implemented at the compilation phase.
@Max: Silverlight is dead. Its life was stretched out a little with Windows Phone (though that's a Silverlight of a different color), but even there they've moved on. Don't expect any news about Silverlight.
@Xpndable: Yeah, the fact that the REPL is a PowerShell REPL and not a C# REPL. Both may be using the .NET framework, but usage patterns are quite different. You can't learn C# by programming in VB.NET, which is really the equivalent of what you're suggesting.
When I say API I'm not interested in how it's implemented, so all of the talk about FromEventPattern is static. What I'm talking about is the public API, in this case the signature of WhenReadingChanged (again, I'm renaming so the name doesn't give an indication of IObservable or Task). The question is whether WhenReadingChanged should return a Task or an IObservable. I'm maintaining it should return an IObservable.
You are, however, correct when you ask if my point is that ReadingChangedObservable could have been used identically in both the async version and the Rx version. That's precisely the point. By defining your API (ReadingChangedObservable / ReadingChangedAsync / WhenReadingChanged / whatever) to return an IObservable you can use it with either Rx composition (appropriate when composing streams) or with async/await (appropriate when composing the "next" event as was done in several examples here). Contrast this with returning a Task. If you do that, you've lost the stream and can only compose with async/await (actually, we're totally glossing over composing with ContinueWith... the key is that await works not on Task but on awaitables, and both Task and IObservable are awaitables). IMHO, nothing is gained by returning Task, but a lot is lost, so you simply shouldn't return Task here.
You can't change the framework. By API, I mean *your* API. In this case, the WhenReadingChanged (or name of your choice) extension method. You still seem to be missing the crux of this, though. "(2) whether we should implement our logic using RX combinators or language combinators" sure seems to be missing my point. By returning an IObservable instead of a Task I have not limited you to Rx combinators. You can still use async/await to compose. What I've done is made the API (WhenReadingChanged) usable by both language and Rx combinators.
The guidelines as I see how they should be followed:
1. If the legacy event is part of an EAP implementation, wrap it with a Task, otherwise wrap it with an IObservable.
2. When composing, if you're composing streams you'll use Rx combinators, otherwise you should prefer language combinators using async/await.
(2) is a little oversimplified, so there are likely exceptions to be found, but (1) seems pretty obvious and I see no room for variation.
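As a concrete sketch of guideline 1's "otherwise" branch, here's a hand-rolled wrapper turning a plain .NET event into an IObservable. All of the names (Sensor, WhenReadingChanged, and so on) are my own stand-ins for the reading-changed event discussed above; in practice Rx's Observable.FromEventPattern does this wrapping for you, and this just shows the shape without the Rx dependency:

```csharp
using System;
using System.Collections.Generic;

var sensor = new Sensor();
var readings = new List<int>();

// Subscribe, observe two readings, then unsubscribe via Dispose.
using (sensor.WhenReadingChanged().Subscribe(new CollectingObserver(readings)))
{
    sensor.Emit(1);
    sensor.Emit(2);
}
sensor.Emit(3); // after Dispose, this is no longer observed

Console.WriteLine(string.Join(",", readings)); // prints "1,2"

// Hypothetical device with a plain .NET event (not an EAP member).
public class Sensor
{
    public event EventHandler<int> ReadingChanged;
    public void Emit(int reading) => ReadingChanged?.Invoke(this, reading);
}

public static class SensorExtensions
{
    // The API under discussion: it returns IObservable, so callers can
    // compose with Rx combinators or await it, rather than only await.
    public static IObservable<int> WhenReadingChanged(this Sensor sensor) =>
        new EventObservable(sensor);

    private sealed class EventObservable : IObservable<int>
    {
        private readonly Sensor sensor;
        public EventObservable(Sensor sensor) => this.sensor = sensor;

        public IDisposable Subscribe(IObserver<int> observer)
        {
            EventHandler<int> handler = (_, reading) => observer.OnNext(reading);
            sensor.ReadingChanged += handler;
            // Disposing the subscription detaches the event handler.
            return new Unsubscriber(() => sensor.ReadingChanged -= handler);
        }
    }

    private sealed class Unsubscriber : IDisposable
    {
        private readonly Action unsubscribe;
        public Unsubscriber(Action unsubscribe) => this.unsubscribe = unsubscribe;
        public void Dispose() => unsubscribe();
    }
}

// Minimal observer used by the demo; Rx's Observer.Create replaces this.
public sealed class CollectingObserver : IObserver<int>
{
    private readonly List<int> target;
    public CollectingObserver(List<int> target) => this.target = target;
    public void OnNext(int value) => target.Add(value);
    public void OnError(Exception error) { }
    public void OnCompleted() { }
}
```

Note what wasn't lost: because the wrapper returns the stream, you can still take just the "next" reading and await it (with Rx's awaitable support), whereas a Task-returning wrapper throws the stream away up front.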