
Comments

CornedBee
  • Stephan T. Lavavej - Core C++, 8 of n

    Functional notation is, sadly, semantically equivalent to a C-style cast, with all the dangers that brings. So while it looks nice, it's not a good idea to do it.

    In my opinion, functional casts should have static_cast semantics, but unfortunately we're stuck in a world where that isn't the case. I suspect that functional cast notation is older than the more limited C++ casts, and by the time the verbose syntax was introduced, it was already too late to change the meaning of functional cast notation.
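    A minimal sketch of the danger (the alias name is made up for illustration): with a single-word type name, the functional notation compiles even where static_cast would rightly refuse, because it carries full C-style cast semantics:

    ```cpp
    #include <iostream>

    int main() {
        double d = 3.14;
        using intptr = int*;                    // single-name alias so functional syntax applies
        intptr p = intptr(&d);                  // compiles: equivalent to (int*)&d, i.e. a reinterpret_cast
        // intptr q = static_cast<intptr>(&d);  // error: static_cast rejects double* -> int*
        std::cout << (p != nullptr ? "compiled" : "") << "\n";
        return 0;
    }
    ```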

  • Stephan T. Lavavej - Core C++, 8 of n

    > I still don't understand why the pointer-to-member operators .* and ->* have such low precedence; they're lower than . and -> and they're right above multiplication.

    It makes parsers more complicated to mix binary-operator and unary-operator precedences. I suspect this was the main reason.

    It would so be worth it, though. I think it would work, too. Of course, it's way too late to change it now.
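    A small sketch of the annoyance (Widget and pmf are hypothetical names): because the function-call operator binds tighter than .*, calling through a pointer to member function always needs an extra pair of parentheses:

    ```cpp
    #include <iostream>

    struct Widget {
        int value() const { return 42; }
    };

    int main() {
        Widget w;
        int (Widget::*pmf)() const = &Widget::value;
        // w.*pmf();       // error: parsed as w.*(pmf()), since () binds tighter than .*
        int r = (w.*pmf)();  // parentheses required around the .* expression
        std::cout << r << "\n";
        return 0;
    }
    ```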

  • Stephan T. Lavavej - Core C++, 8 of n

    And finally, some comments on the sorter. Looks like a fun exercise.

    sizeof... is not just syntactic sugar, though. A manually implemented sizeof... would have linear "runtime" (number of instantiations), whereas the built-in sizeof... is O(1). (One big issue with variadics as they are is that compilation tends to be very slow, due to the lack of random access into argument packs. Someone (I think from Boost) studied this and found that Boost.Tuple (a preprocessor-powered, non-variadic tuple) compiled significantly faster than a naive variadics-based version.)
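    To illustrate the difference (Count is a hypothetical name): the hand-rolled version below instantiates one helper per pack element, while the built-in needs no helper instantiations at all:

    ```cpp
    #include <cstddef>
    #include <iostream>

    // Manual count: one class template instantiation per pack element -> linear compile-time work.
    template <typename... Ts> struct Count;
    template <> struct Count<> { static const std::size_t value = 0; };
    template <typename T, typename... Ts>
    struct Count<T, Ts...> { static const std::size_t value = 1 + Count<Ts...>::value; };

    template <typename... Ts>
    std::size_t builtin_count() { return sizeof...(Ts); }  // O(1): no extra instantiations

    int main() {
        static_assert(Count<int, char, double>::value == 3, "manual count works");
        std::cout << builtin_count<int, char, double>() << "\n";
        return 0;
    }
    ```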

    BisectHelper's first specialization is overly verbose. Why not just do this?

    template <typename Ints1, typename Ints2> struct BisectHelper<Ints1, Ints2, true> {
      typedef Ints1 first;
      typedef Ints2 second;
    };

    Or even put the typedefs in the primary template and static_assert that Done == true, to capture implementation bugs. The only thing the verbose version you have gives you is detecting when somebody instantiates BisectHelper with something other than an Ints specialization. Is that worth it?

    Concat is also too complicated. Or rather, it is a more complicated primitive than you need. The only place where you use Concat is in MergeHelper, where you use it as Concat<Ints<N>, something>, where N is a single int. In other words, you never give more than a single int as the first parameter to Concat. Why not save yourself the Ints instantiations and use a Prepend primitive instead?

    template <int A, typename Ints> struct Prepend {};
    template <int A, int... Vals> struct Prepend<A, Ints<Vals...>> {
      typedef Ints<A, Vals...> type;
    };
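    A quick check of the primitive, assuming an Ints pack holder of the shape the lecture uses (the template parameter is renamed IntsT here to avoid shadowing the class template's name):

    ```cpp
    #include <type_traits>
    #include <iostream>

    template <int... Vals> struct Ints {};  // the pack holder assumed by the lecture code

    template <int A, typename IntsT> struct Prepend {};
    template <int A, int... Vals>
    struct Prepend<A, Ints<Vals...>> { typedef Ints<A, Vals...> type; };

    int main() {
        static_assert(std::is_same<Prepend<1, Ints<2, 3>>::type, Ints<1, 2, 3>>::value,
                      "prepends a single int at the front");
        std::cout << "ok\n";
        return 0;
    }
    ```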

  • Stephan T. Lavavej - Core C++, 8 of n

    OK, finished watching the lecture.

    On the ODR rule: here's another fun thing you can do wrong in header files: unnamed namespaces. The common wisdom is not to use them in header files, and there are two reasons for that. The underlying problem is that an unnamed namespace internally gets a name that is unique not to the file it is in, but the translation unit. This means that when included from foo.cpp, an unnamed namespace in a header gets one name, and when included from bar.cpp, it gets another.

    One problem with that is the duplication of everything in that namespace. The symbols are no longer the same, and thus won't be thrown out by the linker. This wastes space, and can be extremely confusing when you define variables there - every translation unit gets its own global variable that the other TUs cannot access. But the behavior, while confusing, is still defined.

    But then you might add an inline function in that header that accesses something in the unnamed namespace. And now you're in trouble with the standard. Here's some actual code:

    namespace {
      struct X { int y; };
    }
    
    inline int foo(int i) {
      X x = { i };
      return x.y;
    }

    Looks harmless enough, right? The inline function is defined in the header, so obviously it's going to be the same in all translation units. Actually, no. When included from foo.cpp, there's a foo that references <foo-unnamed-namespace>::X, whereas when included from bar.cpp, there's a foo that references <bar-unnamed-namespace>::X. So these two are actually different, even though they are textually equal. ODR is violated, no diagnostic required.

     

    Of course, it's very simple to avoid this. Just don't use unnamed namespaces in headers. Ever.
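    One header-safe alternative, sketched here as an assumption rather than anything from the lecture: a named namespace gives X the same linkage name in every translation unit, so the inline function stays ODR-clean:

    ```cpp
    #include <iostream>

    // In the header: a named namespace instead of an unnamed one, so X has the
    // same name in every translation unit that includes this.
    namespace detail {
        struct X { int y; };
    }

    inline int foo(int i) {
        detail::X x = { i };
        return x.y;  // every TU's foo now references the same detail::X
    }

    int main() {
        std::cout << foo(7) << "\n";
        return 0;
    }
    ```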

  • Stephan T. Lavavej - Core C++, 8 of n

    STL wrote:

    I apologize for the confusion. Have you ever thought about applying to work on the compiler team? :->

    Yes, but I didn't particularly want to leave Vienna. Zurich, where I am now, is already quite far in my opinion. (Yeah, I'm rather attached to my home city.)

  • Stephan T. Lavavej - Core C++, 8 of n

    On a different issue, C-style casts are even nastier than you've described. Here are two more nasty things they do.

    First, they are even more desperate than a reinterpret_cast. A reinterpret_cast at least promises to preserve constness on pointer conversions. (You can lose constness by converting to an integer and back, but at least that requires two reinterpret_casts.) But a C-style cast doesn't do that; it will do the work of both a reinterpret_cast and a const_cast.

    But it gets even more desperate! It ignores access specifiers in order to achieve its goals. Imagine you have this:

    class Derived : public NonEmptyBase, private Base {};

    Note the private inheritance. You cannot implicitly cast a Derived* to a Base*, nor can you static_cast a Base* to a Derived*. You can reinterpret_cast them, but that's a bitpattern cast, not a hierarchy cast.

    But a C-style cast will do the hierarchy cast. Yes, that weird thing will actually ignore the fact that the relationship between Derived and Base ought to be invisible, and will do a hierarchy cast instead of falling back to a reinterpret_cast. I'm not sure if that's a good thing or not (at least it kinda works), but it's definitely weird.
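    A small demonstration using the class names from above (members are made up): the C-style cast compiles where both the implicit conversion and static_cast fail, and it performs a genuine hierarchy cast, as the round-trip shows:

    ```cpp
    #include <iostream>

    class NonEmptyBase { int pad = 0; };
    class Base { public: int b = 0; };
    class Derived : public NonEmptyBase, private Base {};

    int main() {
        Derived d;
        // Base* pb = &d;                      // error: Base is a private base
        // Base* pb = static_cast<Base*>(&d);  // error for the same reason
        Base* pb = (Base*)&d;                  // compiles: ignores access, does the real
                                               // hierarchy cast (adjusts the pointer)
        Derived* back = (Derived*)pb;          // and it round-trips correctly
        std::cout << (back == &d ? "round-trip ok" : "broken") << "\n";
        return 0;
    }
    ```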

     

    Second, the missing-hierarchy trap is worse than you described. To recap, you said that (Derived*)base_ptr is dangerous because, due to a programmer error, Derived might not actually derive from Base, but the C-style cast will happily fall back to a reinterpret_cast in this situation.

    Well, Derived might actually be derived from Base, and this might *still* happen!

    static_cast reverses implicit conversions, with some exceptions. It can't reverse added cv-qualifiers. It can't reverse array and function decay. It can't reverse constant 0 to null pointer conversions. And it can't reverse a number of boolean conversions.

    And it can't reverse a hierarchy cast through a virtual base.

    class Base {};
    class Derived : public virtual Base {};

    Base* pb = new Derived();                // valid
    Derived* pd = static_cast<Derived*>(pb); // doesn't compile
    Derived* pd2 = (Derived*)pb;             // reinterprets!

    The reason is the object layout of virtual inheritance: given a Derived*, I can find the Base subobject either by following a pointer stored in the Derived object or by adding an offset from the vtable to the this pointer (depending on the implementation). But given only a Base*, I cannot do the reverse. The vtable for Base doesn't contain an offset to the Derived superobject, because Base might not have a Derived superobject at all. So I have to invoke the full dynamic_cast machinery anyway, just to find out where the Derived superobject lives.

    Which can fail at runtime. static_cast must not fail at runtime. It must be simple and cheap. So it fails at compile time instead.

    Whereas the C-style cast just moves on and does a reinterpret_cast, which is most definitely not the right thing.

    This means that, if you have a C-style hierarchy cast, you can turn your program from well-defined to silently and nastily undefined simply by adding "virtual" to some class definition. If you had used static_cast, at least it would only fail to compile, and you could use a dynamic_cast there.
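    A sketch of the dynamic_cast escape hatch, with one assumption added to the classes above: Base gets a virtual destructor, since dynamic_cast requires a polymorphic source type:

    ```cpp
    #include <iostream>

    class Base { public: virtual ~Base() {} };  // polymorphic, so dynamic_cast is usable
    class Derived : public virtual Base {};

    int main() {
        Base* pb = new Derived();
        // Derived* pd = static_cast<Derived*>(pb);  // error: can't cast through a virtual base
        Derived* pd = dynamic_cast<Derived*>(pb);    // runtime machinery locates the Derived object
        std::cout << (pd != nullptr ? "found" : "null") << "\n";
        delete pb;
        return 0;
    }
    ```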

    Now don't you wish you hadn't used the C-style cast?

  • Stephan T. Lavavej - Core C++, 8 of n

    STL wrote:

    Consider what would happen if someone defined a Bignum templated on Allocator, and overloaded max(const Bignum<Allocator>&, const Bignum<Allocator>&). When you say max<long long>, name lookup will find all of the possible maxes that are in scope (you may have a using namespace std; and a using namespace BigMath;). Then because you've provided explicit template arguments, it'll bypass template argument deduction and directly try to substitute them in. The one in std results in max(const long long&, const long long&) which you want. But the compiler will also generate the signature max(const Bignum<long long>&, const Bignum<long long>&) because overload resolution happens *later*. It is quite possible for Bignum<long long> to explode because long long is not an Allocator - that is, merely forming that type can cause a compiler error, even though you aren't calling any Bignum members and the whole overload is nonviable anyways. Such a compiler error does not trigger SFINAE and the whole compilation fails.

    If the compiler does that, it's buggy. Overload resolution only requires the signature of the function to be instantiated; its definition won't be instantiated unless the function is actually selected.

    Since only the signature of the function is instantiated, we only need the minimum requirements for the argument types. Basically, we can write the function prototype as instantiated and reason about that:

    const Bignum<long long>& max(const Bignum<long long>&, const Bignum<long long>&);

    Since these are references (and also function parameters/return values in a prototype), we don't need the complete type of Bignum<long long>. So only the class declaration is instantiated, not the class definition. In other words, since it would be sufficient to write

    class Bignum_long_long;

    if Bignum weren't a template but specialized, copy&pasted code, the compiler won't generate more code than that from the template either. In particular, it won't try to instantiate the definition of the class, so any errors resulting from that (such as attempting to access long long::pointer, which would be likely with an Allocator parameter) cannot happen.
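    A compilable sketch of the point (max2 is named to avoid clashing with std::max): declaring a function whose parameters are references to Bignum<long long> never completes the class, so the would-be error inside its definition never fires:

    ```cpp
    #include <iostream>

    template <typename T>
    struct Bignum {
        typename T::pointer p;  // explodes if the *definition* is instantiated with long long
    };

    // Only a declaration: reference parameters and return types don't require the
    // complete type, so Bignum<long long> is declared but never defined here.
    const Bignum<long long>& max2(const Bignum<long long>&, const Bignum<long long>&);

    int main() {
        std::cout << "compiles fine\n";  // max2 is never called, Bignum<long long> never completed
        return 0;
    }
    ```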

     

    It's a different story if the template in question actually needs to be fully instantiated. For example, if the method had some enable_if check that looked into Bignum:

    typename std::enable_if<is_standard_pointer<typename Bignum<Alloc>::pointer>::value,
                            const Bignum<Alloc>&>::type
    max(const Bignum<Alloc>&, const Bignum<Alloc>&);

    Now it needs to fully instantiate Bignum's definition even when only instantiating the signature, in order to find out what Bignum<Alloc>::pointer is. Now the compiler may fail.

    But not before.

  • C++ and Beyond 2012: Andrei Alexandrescu - Systematic Error Handling in C++

    And to elaborate on sellibitze's answer, this:

    template <class F,
              class R = typename std::decay<decltype(std::declval<F>()())>::type>
    Expected<R> expFromCode(F fun)

    can be more easily written as

    template <class F>
    auto expFromCode(F fun) -> Expected<decltype(fun())>

    I left the std::decay off because I don't see why the function's return type needs to decay.

  • GoingNative 9: LINQ for C/C++, Native Rx, Meet Aaron Lahman

    @schroedl: Yes, function-form begin/end and the range-based for-loop are designed to automatically pick up anything that has begin/end members, so they should just work out of the box with this library. No idea why the example didn't use range-based for.
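    A minimal sketch of the "out of the box" behavior (the Counter type is invented for illustration): any type with begin()/end() members is picked up by the range-based for loop with no further adaptation:

    ```cpp
    #include <iostream>

    struct Counter {
        int data[3] = {1, 2, 3};
        int* begin() { return data; }
        int* end() { return data + 3; }
    };

    int main() {
        Counter c;
        for (int i : c)  // the loop finds c.begin()/c.end() automatically
            std::cout << i << " ";
        std::cout << "\n";
        return 0;
    }
    ```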

     

  • C9 Lectures: Stephan T Lavavej - Advanced STL, 6 of 6

    Ivan: check out Boost.Range. It allows you to do this:

    using namespace boost::range::algorithm; // can't remember exact namespace
    sort(some_vector);
    auto r = equal_range(some_vector, 10);
    for_each(r, [](int i) { std::cout << i << std::endl; });