abir: Your #1 successfully compiles with our latest compiler build. (They've fixed several bugs in alias templates after the CTP snapshot was taken, most of which were found while we were implementing std::integer_sequence.)
I've filed your #2 as DevDiv#835786 in our internal database, and your #3 as DevDiv#835794.
Thanks, I'll ask the compiler dev who wrote constexpr. That's emitting "warning C4425: 'const char (&)[N]' : 'constexpr' was ignored (class literal types are not yet supported)" which at the very least is an inaccurate warning.
I've confirmed that the CTP is targeting C++11 constexpr (minus member functions), not C++14's extended rules, which will require more infrastructure work in the compiler. They're working on it, but we can't promise anything for the next RTM.
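For context, a quick sketch of the gap between the two rulesets (function names are mine, and the second one needs a compiler running in C++14 mode or later):

```cpp
#include <cassert>

// C++11 constexpr bodies are limited to essentially a single return
// statement, so anything iterative has to be written recursively.
constexpr int factorial_cxx11(int n) {
    return n <= 1 ? 1 : n * factorial_cxx11(n - 1);
}

// C++14's extended rules allow local variables, loops, and multiple
// statements inside a constexpr function.
constexpr int factorial_cxx14(int n) {
    int result = 1;
    for (int i = 2; i <= n; ++i) {
        result *= i;
    }
    return result;
}

static_assert(factorial_cxx11(5) == 120, "evaluated at compile time");
```

Supporting the second form is the "more infrastructure work" part: the compiler has to actually interpret loops and mutation at compile time, not just fold expressions.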
Sebastian Redl> The non-await version of the code at the end is full of errors!
Yeah, that was just a quick sketch that Deon sent to me. I was all, "I'm filming in an hour, can you give me anything for await?"
I forgot to ask whether we had implemented C++11 or C++14 constexpr for the CTP, much less what our plans for the next RTM are. (Before filming my video, I dropped by all of the compiler devs' offices and interrogated them about their features, hence the detailed caveats I presented - I should have remembered about constexpr's spec changing but I didn't.) I'll find out.
NotFredSafe> Printing the 101 inside test2() is kind of redundant, because if zeroth didn't return a reference, the expression ++zeroth(v) would not even compile, right?
Yes for integers, no in general. Given int yinsh(), ++yinsh() is ill-formed (N3797 5.3.2 [expr.pre.incr]/1, "The operand shall be a modifiable lvalue.", and the expression yinsh() is a prvalue). However, given UDT zertz(), ++zertz() will be well-formed if the UDT overloads op++(). (For example, deque<T>::iterator.) This is because the preincrement is fancy syntax for a member function call, and non-const member functions can be called on temporaries (prvalues) unless specifically forbidden with ref-qualifiers (which didn't exist in C++98/03).
Mostly though, I considered it more fun to print something than to say "look, it compiles!".
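The distinction above can be sketched with the hypothetical yinsh/zertz names from the reply (this is my illustration, not code from the video):

```cpp
#include <cassert>

int yinsh() { return 5; }

struct Zertz {
    int value = 5;
    Zertz& operator++() { ++value; return *this; } // member preincrement
};

Zertz zertz() { return Zertz{}; }

// ++yinsh() would NOT compile: yinsh() is a prvalue of scalar type, and
// built-in preincrement requires a modifiable lvalue.
//
// ++zertz() compiles: here preincrement is a member function call, and
// non-const member functions may be called on prvalues (no ref-qualifiers
// forbid it).
int demo() {
    return (++zertz()).value; // modifies the temporary, then reads it
}
```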
Matt_PD> how viable is running VC 2012 Ultimate, VC 2013 Pro RTM, and VC Nov 2013 CTP (all three) alongside each other?
Running 2012 RTM/Update N and 2013 RTM side-by-side is supported and extensively tested. (As usual, there may be bugs in obscure corner cases.) The Nov 2013 CTP requires 2013 RTM, and *should* be unaffected by 2012's presence.
> -- any chance of reusing Boost Binaries built for MSVC 2013 in MSVC Nov 2013 CTP
Because this is compiler-only, you can probably get away with mixing and matching. I wouldn't recommend it, though.
> (if binary compatibility isn't expected, as I vaguely seem to recall, any experience with building Boost in MSVC Nov 2013 CTP? :>)
I believe the compiler front-end test team has been building Boost, but I don't know the details of their work.
Simon Buchan> You mentioned constexpr has restrictions in this release (better than it ICEing!) - are those defined anywhere yet?
It's not available on (non-static) member functions, including constructors. (I confirmed this with the compiler dev who implemented it.)
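A minimal illustration of that restriction (names are mine): the free function is what the CTP accepts, and the struct shows what full C++11 constexpr accepts but the CTP rejects.

```cpp
#include <cassert>

constexpr int square(int n) { return n * n; } // free function: OK in the CTP

struct Point {
    int x, y;
    // constexpr constructors and member functions are valid C++11,
    // but the Nov 2013 CTP rejects them.
    constexpr Point(int x_, int y_) : x(x_), y(y_) {}
    constexpr int sum() const { return x + y; }
};

static_assert(square(12) == 144, "evaluated at compile time");
```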
> 1) What do you mean by "(One optimization I currently don't do is to store bits in the distribution object
If you give me an mt19937 (32-bit input) and ask for [0, 32) (5-bit output), I run the engine once for each output, throwing away 27 bits every time. That is suboptimal.
> Am I right(not sure I understand you 100% correctly) that for evil values of output range this could happen almost 50% of the time
Yes. This happens when the output range is slightly greater than a power of 2 that's around half the size of the input range. For example, mapping [0, 2^32) to [0, 2^31 + 5) is going to trigger a fair number of reruns.
With more math and more complexity I could reduce this (especially by accumulating bits). When I reimplemented uniform_int_distribution my focus was on correctness, not speed.
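The rerun rate above can be computed directly. This is a hypothetical helper for illustration, not the actual &lt;random&gt; implementation: it measures what fraction of the 2^bits inputs falls in the biased tail that has to be thrown away.

```cpp
#include <cassert>
#include <cstdint>

// Fraction of 2^bits equally likely inputs that must be rejected when
// wrapping them onto `range` outputs with modulo, so the surviving
// inputs map uniformly. Assumes 0 < bits < 64 and range <= 2^bits.
double rejection_fraction(int bits, std::uint64_t range) {
    const std::uint64_t inputs = std::uint64_t{1} << bits;
    // Largest multiple of `range` that fits; everything at or above it
    // would bias the modulo and triggers a rerun.
    const std::uint64_t unbiased = (inputs / range) * range;
    return static_cast<double>(inputs - unbiased) / static_cast<double>(inputs);
}
```

For a range of exactly 2^31 nothing is rejected, while 2^31 + 5 rejects just under half of all inputs, matching the worst case described above.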
> So isnt it better to get 1 or 2 bits more
In practice, the input range is 2^32 or 2^64, so we're starting with plenty of bits for most outputs.
> if generator is providing [0,30](31 distinct) and you need numbers [0,15](16 distinct) how do you do that...
I use a multi-step process:
* Using subtraction, I alter the generator's range to start from 0.
* I figure out the highest power of 2 less than or equal to the (altered) maximum.
* I throw away any value that's greater than or equal to that power of 2, as it is useless to me (in which case I rerun the generator). For [0, 1729] this would throw away anything outside [0, 1024). For [0, 30] this would throw away anything outside [0, 16). The worst case is having to throw away about half of the values (and it is nearly theoretical, since non-power-of-2 URNGs are crazy).
* Then I concatenate bits until I have enough for the output. For example, if you give me a generator that produces 4 random bits and I need 30 random bits, I need to accumulate bits. (One optimization I currently don't do is to store bits in the distribution object; I didn't notice this was possible until recently.)
* I do math on the desired output range, to produce a range that's unsigned and 0-based.
* I take my accumulated random bits and use modulo to wrap them around to the (altered) output range. But I do math to figure out which values are unbiased. If I would get a biased value, I throw it away and start over.
* Finally I translate the value to the potentially signed, potentially non-0-based desired output range.
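The steps above can be condensed into a sketch. This is NOT the actual MSVC implementation: it assumes a URNG like mt19937 whose 0-based range is already a power of 2 (2^32), so the input-side rejection step collapses, and it always concatenates 64 bits from two engine calls.

```cpp
#include <cassert>
#include <cstdint>
#include <random>

// Sketch of rejection sampling for a uniform integer in [lo, hi].
// Assumes [lo, hi] doesn't span every int64_t value, so range != 0.
template <typename URNG>
std::int64_t uniform_int_sketch(URNG& gen, std::int64_t lo, std::int64_t hi) {
    // Make the desired output range unsigned and 0-based.
    const std::uint64_t range =
        static_cast<std::uint64_t>(hi) - static_cast<std::uint64_t>(lo) + 1;
    // 2^64 mod range, computed via unsigned wraparound. Inputs at or
    // above 2^64 - rem would bias the modulo, so rerun for them.
    const std::uint64_t rem = (std::uint64_t{0} - range) % range;
    std::uint64_t bits;
    do {
        // Concatenate bits: two 32-bit engine calls yield 64 random bits.
        bits = (static_cast<std::uint64_t>(gen()) << 32) | gen();
    } while (rem != 0 && bits >= std::uint64_t{0} - rem);
    // Wrap onto the 0-based range, then translate back to [lo, hi].
    return lo + static_cast<std::int64_t>(bits % range);
}
```

The `rem != 0` guard handles output ranges that divide 2^64 evenly, where every input is unbiased and nothing should be rejected.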