3 minutes ago, Bas wrote
@evildictaitor: I'm not saying your way is wrong, nor am I under any illusion that I'm in any way qualified to judge your method as 'wrong', if there is such a thing, but that approach sets off two alarm bells in my head.
The first one is complexity: an application that is basically just a place where you import a gajillion libraries sounds pretty complex to wrap your head around.
In theory, perhaps, but in practice the opposite is true. Since each library does exactly one thing and does it well, you end up with less complexity. You also get the benefit that a library's internal classes are explicitly hidden from the application, since they are assembly-scoped.
What it really gives you is that it forces all of your little silos of code (like functional units and components) into formal API contracts that deliberately make it hard for business logic to end up in functional code or vice-versa. Under my scheme it's easier to build a good component that is easy to import (since that's how all components get used anyway) than to build a sprawling, inter-dependent mess, for the simple reason that you only ever see high-level exposed APIs that are designed to be easy to use and import, rather than designed to be coupled to the place you first invented them for.
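To make the assembly-scoping point concrete, here's a minimal sketch of a hypothetical tokeniser library (all names invented for illustration). In a real class library project, the internal class below is invisible to any application that references the assembly, so callers can only couple to the small public API:

```csharp
// Hypothetical library project, e.g. Tokeniser.csproj.
// The public type is the entire API surface an importing app can see.
public static class CSharpTokens
{
    // Count whitespace-separated tokens in a source snippet.
    public static int CountTokens(string source)
    {
        return Splitter.Split(source).Length;
    }
}

// internal = assembly-scoped: applications referencing this library
// cannot see this class at all, so they can't couple to it.
internal static class Splitter
{
    internal static string[] Split(string source)
    {
        return source.Split(new[] { ' ', '\t', '\r', '\n' },
                            System.StringSplitOptions.RemoveEmptyEntries);
    }
}
```

When the splitting logic changes later, nothing outside the assembly can have taken a dependency on Splitter, so nothing outside the assembly breaks.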
Good API design is the key to success in this industry: it decouples your code, makes it easier to refactor and improve a component later, and stops outside code from "hacking" at your component's internals, which is exactly what makes changing the component later hurt.
Secondly, I'm a firm believer in the whole "You aren't going to need it" thing. I used to write absolutely everything I was working on as a reusable component/library, and I eventually realised that I spent a lot of my time making things reusable that I would never actually reuse. Nowadays, I take the approach that I write it so that it works, and if I ever need something I know I've already written before, only then do I take what I wrote and make it reusable.
I never write code that I don't have a need for right now. This isn't so much an "in principle" thing; it's just: why write code that I might need in the future when I'm so busy writing code that I do need right now? If I need that class, function or conditional branch later, I'll write it then.
For instance, if I need some sort of special value converter for a weird databinding, I just write the convert method with some rudimentary type checking and that's it. If I'm working on something else that needs the same value converter, only then do I take the one I originally built, stick it in a library, make sure it's robust enough to survive anything thrown at it, and implement the ConvertBack method. Maybe I won't need it a third time, but the fact that I needed it twice is a good indicator to me that it needs to be reusable. The fact that I needed it once isn't. Obviously I haven't kept any metrics on this, but it feels like I've saved lots of time with this approach.
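A sketch of that two-stage lifecycle (the interface here is a self-contained stand-in for WPF's System.Windows.Data.IValueConverter, and the percent converter itself is invented for illustration):

```csharp
using System;
using System.Globalization;

// Minimal stand-in for WPF's IValueConverter, so the sketch is self-contained.
public interface IValueConverter
{
    object Convert(object value, Type targetType, object parameter, CultureInfo culture);
    object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture);
}

public class PercentConverter : IValueConverter
{
    // First pass: just enough to make the one binding work, e.g. 0.42 -> "42 %".
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        if (value is double d)                       // rudimentary type checking
            return (d * 100).ToString("0", culture) + " %";
        return value;
    }

    // Implemented only once the converter graduates to a shared library.
    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        var text = ((string)value).TrimEnd(' ', '%');
        return double.Parse(text, culture) / 100.0;
    }
}
```

The first time around you'd ship only Convert with a throwing ConvertBack stub; the version above is what the second use would earn you.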
As I said before, if I have code that is highly coupled (by design) and clearly has no use on its own, it goes in the application (or nearest the code that needs it).
Example: Let's suppose I want to write a clone of Visual Studio.
1. Create a WinForms app.
2. On the designer, decide which of my UI controls I want, and import them as libraries.
3. Drag and drop them onto the canvas.
4. Set a couple of events, e.g. on menu handlers, key presses, mouse events or whatever exposed by the UI elements.
5. Link in a C# compiler: a library which itself imports a C# tokeniser and a C# linker.
6. Pass the user data off to the library.
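Steps 5 and 6 amount to the application being pure glue between the user's input and the compiler library. A minimal sketch, with every name hypothetical and the compiler stubbed out:

```csharp
// Hypothetical shape of the imported compiler library.
public interface ICSharpCompiler
{
    string Compile(string source);   // returns diagnostics, say
}

public class CSharpCompiler : ICSharpCompiler
{
    // In the real library this would delegate to the imported
    // tokeniser and linker; stubbed here so the sketch stands alone.
    public string Compile(string source) =>
        source.Length == 0 ? "error: empty input" : "ok";
}

// The application layer: take user data, hand it to the library.
public static class App
{
    public static string OnBuildClicked(ICSharpCompiler compiler, string editorText)
        => compiler.Compile(editorText);
}
```

The app never learns how compilation works; it only knows the one method the library exposes.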
Now let's suppose we want to add a control that does automatic syntax highlighting. This is relatively coupled to the app, but it's likely to (a) be big, and (b) be something that someone else might want for a different app (e.g. syntax-highlighting VB in Excel), so we create it as a separate UI project. It imports the tokeniser base library, which has an ITokeniser interface and allows the application to hand it a C# tokeniser for syntax highlighting without coupling the syntax-highlighter control to the C# tokeniser. It also avoids cluttering the namespace of people who import it, because we can keep a really small API footprint even if the code behind it is quite big, increasing the (re)usability of the library.
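The shape of that decoupling might look like this (ITokeniser is the name used above; its members and the token representation are my assumptions):

```csharp
using System.Collections.Generic;

// Tokeniser base library: the only thing the highlighter depends on.
public interface ITokeniser
{
    IEnumerable<(string Text, string Kind)> Tokenise(string source);
}

// Lives in the C# tokeniser library; the highlighter never sees it directly.
public class CSharpTokeniser : ITokeniser
{
    static readonly HashSet<string> Keywords = new HashSet<string> { "int", "class", "void" };

    public IEnumerable<(string Text, string Kind)> Tokenise(string source)
    {
        foreach (var word in source.Split(' '))
            yield return (word, Keywords.Contains(word) ? "keyword" : "other");
    }
}

// Syntax-highlighting control: knows ITokeniser, not any concrete tokeniser.
public class HighlightingControl
{
    private readonly ITokeniser tokeniser;
    public HighlightingControl(ITokeniser tokeniser) { this.tokeniser = tokeniser; }

    // Stand-in for real rendering: bracket the keywords.
    public string Render(string source)
    {
        var parts = new List<string>();
        foreach (var (text, kind) in tokeniser.Tokenise(source))
            parts.Add(kind == "keyword" ? "[" + text + "]" : text);
        return string.Join(" ", parts);
    }
}
```

Plugging in an HtmlTokeniser later is just another ITokeniser implementation; the control itself never changes.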
Now what if you want to add an HTML tokeniser? Well, add the HTML tokeniser you built two years ago when you were parsing HTML. Shazam, magically it works. Note that it all works because everything depends on as few other libraries as possible, with the exception of the top-level application, which imports everything. That way, the C# tokeniser isn't tied to any UI stuff, the HtmlTokeniser isn't tied to the C# compiler, the DockingManager doesn't understand anything about your business logic, and the application is clean and tidy because it describes what the app does without delving into the low-level mechanics of how it does it.
By tying your code together as libraries, you get a huge productivity boost when writing applications, because your code automatically gets written to be imported and reused. And because libraries live in separate assemblies, it's deliberately hard for anyone to rely on how the controls/functional units work internally; they have to use the APIs. That allows greater decoupling of components, faster swapping of implementations for better ones, and generally higher reuse of code. It also forces you not to think about UIs or business logic when you're in functional code, not to think about functional code or business logic when you're designing UIs, and not to think about UIs or functional code when writing business logic.
Maybe I'm not explaining it very well. But seriously: I credit that decision as one of the key reasons for the success of some of the programs I've written, and I would never turn back. It's a bit of a pain to start off with, perhaps, but after a small number of libraries you'll find you've easily made up the time spent right-clicking "New C# library" in a VS solution.