Reifnir, thanks for the link, I'm on restricted internet right now but I'll check it out later.
I guess my point is that the Build demo on the microservices architecture of Age of Ascent is what I recognise and have been building or evangelising in my day job. Juval, on the other hand, is talking about thousands of services (!) and throwing out ideas like 'int as a service'. It feels to me like this isn't the same thing; it's almost philosophical. It's nanoservices, and perhaps an approach to a framework or fabric that I would build on, as opposed to build myself.
I'm currently working at a gig using npm, grunt, bower and friends on a large Angular app + WebAPI.
The front-end guys, working remotely, were originally Mac-based, with some PC + VS2013 experience. They wrote the gruntfile, package.json, etc.
The project has never built on the dev team's machines. We're all on Windows, with AD, group policies, and proxy and firewall rules, at a financial company.
We've had the MAX_PATH issue, and we continue to have proxy problems. Even with the latest npm, using dedupe, and being routed directly out to the web, we still have npm package installations failing on "npm install" for reasons we cannot reasonably debug.
We've so far not been able to get automated build working for the front-end/site. This is obviously a problem for continuous integration and testing.
I would recommend treading very cautiously if you are to embark on a greenfield website on Windows and are mulling the use of these Linux/Mac technologies.
I think a major problem is that there are so few people consuming these tools and the packages from Windows that bugs are not found and there's a lack of interest in resolving them for such a small number of "Microsoft" devs.
Limit the number of packages you use in the first place. Potentially have the front-end devs maintain their own gruntfile + package.json.
I also recommend checking the packages into source control and avoiding "npm install" once you have successfully cached node_modules locally and deduped to the point where all paths are under MAX_PATH; otherwise the installs could just stop working at some future point.
Btw, I had to install the Cntlm local proxy to get this to work, and even then some packages seem to call git in such a way that git ignores its own proxy settings. It's hard going and, to my mind, given the features in VS, it's not really winning us anything yet, except that we could work with a "cooler" digital agency for the front end who like to use Macs.
On the ApplicationUserManager.Create method: you fail to mention, firstly, that it's a factory method for the class, or to discuss the arguments being passed in and what the options parameter is needed for.
The method also news up its own UserStore instance, which is confusing, because I thought the UserStore was the dependency seam, so I'd have expected it to be passed in. You explained that a UserManager does CRUD on users via a UserStore, which is a repository (though you didn't use that recognisable term), so I'd have thought it would be parameter-injected.
You also talk about the defaults being set in code, as opposed to a config file, and you say it's better because config files are not compiled in. But that is the whole point and benefit of config files. This code is not production-ready, yet instead of highlighting that, you pretend it's better to have to recompile and redeploy an app to change some settings.
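For what it's worth, here's a minimal sketch of the parameter-injected shape I'd have expected, using invented stand-in types rather than the real Microsoft.AspNet.Identity classes (every name below is hypothetical):

```csharp
using System;
using System.Collections.Generic;

// Minimal stand-ins for the ASP.NET Identity types discussed in the
// video; the real ones live in Microsoft.AspNet.Identity and friends.
interface IUserStore
{
    void CreateUser(string userName);
}

// An in-memory fake; the real seam would be the EF-backed store.
class InMemoryUserStore : IUserStore
{
    public List<string> Saved { get; } = new List<string>();
    public void CreateUser(string userName) => Saved.Add(userName);
}

class AppUserManager
{
    private readonly IUserStore _store;

    // The store arrives as a constructor parameter (the dependency
    // seam), so tests can pass a fake instead of a real database.
    public AppUserManager(IUserStore store)
    {
        _store = store;
    }

    public void CreateUser(string userName) => _store.CreateUser(userName);

    // Factory method: the single composition point that chooses the
    // concrete store, instead of new-ing it up inside the manager.
    public static AppUserManager Create() =>
        new AppUserManager(new InMemoryUserStore());
}
```

The point being that the choice of concrete store belongs in one composition spot (the factory), not buried inside the manager, and the constructor seam lets a test hand in a fake.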
I disagree with Winslow; the presenters are just being friendly, and they are.
For the record, I'm thankful for these videos; they're really helping to unravel the mountains of generated junk I now have to try to rework in my new site. No one should just start using a template without making it conform to the ways of their team.
I've never before started an MVC project and then immediately needed to take the day off to watch several videos about all the crap in the template.
But specifically about this video, I would really warn against taking the view that "magic" will mean we don't need to know anything.
Quote: "This is just template code, you don't necessarily have to understand what's going on here."
Man. I need to know what that code is doing; it's why I'm watching this video. It's this thinking that's left the template with such a lack of comments, both proper XML comments and prose about what's going on and why. Who reviewed this template?
Please stop being happy about framework magic. It makes us stupid, and it's not discoverable.
Also, you don't explain Owin within the context of IIS. Although I've followed Owin and Katana for a while, I was under the impression that Owin was an alternative pipeline used only when not on IIS. I guess at some point we have to move across. Are we running two pipelines now? What's the deal? If this is new and the future, and it's so important for auth, then it needs more focus.
I think you needed to spend a bit more time in the opening minutes on comparing the old world with the new. How Owin is exposed to and permeates an MVC 5 application today.
There are some odd things going on in the template (no comments, of course), like in the ChallengeResult, where it seemingly has to communicate a response to two pipelines: it reaches down into Owin to send a challenge and then also returns a traditional response message. It's odd.
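To illustrate the two-pipeline dance I mean, here's a stripped-down, self-contained sketch of that shape; every type below is a stand-in I invented, not the real System.Web or Microsoft.Owin type:

```csharp
using System;
using System.Collections.Generic;

// Stand-in for the Owin authentication middleware side of the house.
class OwinAuthManager
{
    public List<string> PendingChallenges { get; } = new List<string>();
    public void Challenge(string provider) => PendingChallenges.Add(provider);
}

// Stand-in for the template's ChallengeResult, which in the real code
// derives from an MVC unauthorized result (the System.Web pipeline)
// and yet also executes against the Owin pipeline.
class ChallengeResult
{
    private readonly string _provider;

    public ChallengeResult(string provider)
    {
        _provider = provider;
    }

    // One action result, two audiences: register the challenge with
    // Owin, and also hand a classic 401 back to the old pipeline.
    public int ExecuteResult(OwinAuthManager owinAuth)
    {
        owinAuth.Challenge(_provider);
        return 401; // the "traditional response message" half
    }
}
```

That one method talking to both worlds is exactly the oddity I'd have liked the video (or a template comment) to address.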
I'm sorry to sound so down, but we have to continually keep up with changes across the entire breadth of the .NET stack, plus Azure (OMG, Azure is getting so huge so fast), JS and HTML, as well as deliver code, pretend we like the latest coding fad, learn an existing codebase, and keep abreast of change in the industry sectors we work in. It's very hard work.
Whoa. Major no-no in here. Do not use Parallel.ForEach to run a query across each of the shards. Parallel.For and ForEach run on the thread pool, which starts with roughly a thread per core and then injects a new thread only when it detects a blocked thread, about once per second.
It is NOT a convenient way to spawn, say, 10 tasks to run on 10 shards. You are not issuing those queries all at once.
To genuinely issue all the queries at once (and you should), you must "manually" create and start a Task per shard, put them all in a collection, and wait on them all.
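To make that concrete, here's a minimal sketch of the fan-out, with a hypothetical QueryShardAsync standing in for the real per-shard query (the shard count, delay, and fake results are all invented for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

static class ShardFanOut
{
    // Hypothetical per-shard query; in real code this would be an
    // async ADO.NET or EF call using that shard's connection string.
    static async Task<int> QueryShardAsync(int shardId)
    {
        await Task.Delay(50);  // stand-in for network/database latency
        return shardId * 10;   // stand-in partial result
    }

    public static async Task<int> QueryAllShardsAsync(int shardCount)
    {
        // Start ALL the queries up front. Nothing here blocks a
        // thread-pool thread, so there's no slow thread injection.
        List<Task<int>> tasks = Enumerable.Range(0, shardCount)
                                          .Select(QueryShardAsync)
                                          .ToList();

        // Wait for every shard to answer, then combine the results.
        int[] partials = await Task.WhenAll(tasks);
        return partials.Sum();
    }
}
```

With this shape, all ten stand-in delays overlap instead of ramping up a thread at a time, which is the whole point of fanning out across shards.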
What's the Microsoft hardware story for someone trying to utilise GPIO and developing in C#?
For example, I'd like to write a system to control a pumping station, I'd like to host a REST service on the device itself using Katana/OWIN and control valve states via GPIO. I'd like to use managed Azure libraries to talk to the cloud. I'd like to run Windows, the full .NET Framework and code it all using C#.
At present, my option is a Raspberry Pi and Mono with the RaspberryPi.Net library. Do these boards change any of that? The missing component appears to be managed libraries for the IO capabilities on the boards.