I care more about being able to generate a script (say a PowerShell script) rather than sync across devices.
One reason is so I can share the script with colleagues. For example, one developer I work with has avoided upgrading his development machine from Windows 2008 R2 because he worries about how much work is involved in reinstalling Visual Studio.
If I can save scripts from the installer to a file on my disk, I can easily share that with my colleagues. Syncing through my MSDN account is useless to me.
Yes. How you'd do this, and how easy it is, depends on the technology. If the install was created with WiX and is a Burn bundle, you can use dark.exe to extract the MSI. Some installs have a command-line parameter which extracts the files locally.
Most installs are in fact some type of archive with a wrapper, so something like 7-Zip or IZArc can extract the resources, though it can take some investigation to figure out which file is the MSI.
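For example (the setup file name is made up, and dark.exe comes with the WiX toolset), something along these lines usually gets you to the MSI:

```shell
# WiX Burn bundle: dark.exe can extract the embedded MSI(s)
dark.exe SomeProductSetup.exe -x extracted

# Plain self-extracting wrapper: try 7-Zip directly
7z x SomeProductSetup.exe -oextracted

# List what came out and look for the .msi
dir extracted\*.msi
```

Which of the two works (if either) depends on how the wrapper was built, hence the investigation.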
BTW, there is another tool that does the same thing as Orca, http://www.instedit.com/, but with some better capabilities and a more up-to-date UI. The free version is worth a look (I haven't tried the paid one).
I've had to use Dependency Walker quite a lot to diagnose installs.
It's worth mentioning a few of the gotchas, like making sure you match the architecture of Dependency Walker (x86 vs. x64) to the architecture of the PE you're analyzing, or that WinSxS typically throws lots of false positives into the results.
@MadsTorgersen: Right! Forgot to raise that (you even mentioned async not supporting out parameters in the video).
@aarondandy: If we forget backward compatibility for a second, defaulting to non-nullable for reference types is IMHO the right choice, with some sort of modifier (like a ? suffix) for switching on nullability when you need it. It'd be nice to get to a point where I don't have to remember to put the ! on everything, because you just know I'll forget. It'd be nice if getting there didn't require a whole new language.
The way to get there without a whole new language (which has probably been suggested in the discussions) is through some type of compile-time option. Mads talked about some sort of warning (which will often be ignored) or a Roslyn checker (a la Dustin's C# Essentials). I think a code analysis rule (which can be made to trigger compiler warnings or errors, or be ignored, on a project-by-project basis) probably gives the right semantics.
The reason I like tuples is for stuff that might fail.
Let's suppose you have a method which connects to a service of some sort and can either succeed and bring back data, or fail and bring back an error. How can we represent that in C# today? I can think of three ways, all bad:
Throw an exception, which is bad because it goes against the idea that exceptions are for exceptional circumstances.
Return a value which indicates the status of the method (a la DateTime.TryParse(string input, out DateTime result)) and use an out parameter, which is bad because out parameters are the ugliest monstrosity ever.
Put a status property on the object which is returned (a la WebClient.GetResponse(....)) which is bad because the status of the request becomes conflated with the state of the object.
Compare any of these with something like (excuse the pseudo-ish code):
(Data d, Status s) GetDataFromService(....)
// called like so
(Data d, Status s) = GetDataFromService(....)
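C# 7 tuple return types ended up looking very much like this sketch. A minimal runnable version, with made-up Data and Status types standing in for real ones:

```csharp
using System;

enum Status { Ok, Failed }

class Data
{
    public string Payload = "";
}

static class Service
{
    // Hypothetical service call: success or failure comes back alongside
    // the data as a tuple, with no out parameters and no thrown exception.
    public static (Data data, Status status) GetDataFromService(bool succeed)
    {
        if (succeed)
            return (new Data { Payload = "hello" }, Status.Ok);
        return (new Data(), Status.Failed);
    }
}

class Program
{
    static void Main()
    {
        // Deconstruct the tuple at the call site, as in the sketch above.
        var (d, s) = Service.GetDataFromService(succeed: true);
        Console.WriteLine(s);
    }
}
```

The caller can name both results and check the status in one place, which avoids all three of the problems listed above.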