@petke, @ryanb: The C++ language standard has no notion of "encoding". Moreover, it has no notion of "text" and "binary representation of text". Any explicit Unicode-related operations are strictly outside the scope of the language.
What C and C++ do acknowledge is that the historically misnamed type `char`, which should probably have been called `byte`, is insufficient for textual purposes, so they provide a platform-dependent, otherwise unspecified type `wchar_t` to hold a platform-dependent character primitive. Acknowledging further that the outside world communicates in byte streams, the standard assumes that there exists a (platform-dependent) method to translate between fixed-width wide strings and variable-width byte streams, provided by `mbsrtowcs` and `wcsrtombs`. (I wrote a little rant about this on SO a while ago.)
In this sense, the native MSVC environment only supports UCS-2; otherwise it gives you variable-width 16-bit strings internally with no genuine "character" functions built into the language. It's also worth noting that most file systems accept null-terminated byte strings as file names (again with no notion of encoding or textual semantics), while NTFS is one of the rare kinds to use null-terminated 16-bit strings (not: Unicode) as file names, necessitating non-standard file functions like `_wfopen`.
Personally, I think the notion of "text" is just very high-level, and instead of burdening a general-purpose language like C or C++ with it, it's best left to something like ICU that can perform normalization and other deep text operations.