@Ivan: Each container has begin and end because that's how C++ implements interfaces (C++ doesn't have an interface keyword like other languages) and decouples containers from the algorithms.
I could be wrong, but sort as a member function would look like sort belongs only to that container and must be implemented by it. It's not that "C++ doesn't like objects"; it was the solution they found when designing a library in the most generic way possible. As a member it would only inflate the container with functionality that doesn't belong to it. Which sort algorithm do you pick if an algorithm fits more than one type of container? Replicate the code? Even if you think of a sort member as a shortcut for a call to sort(s_begin, s_end, d_begin), it just duplicates code and defeats the purpose of iterators (why have an iterator interface if sort lives on the object anyway?)
It's like the pretty printer above: you could overload ostream's operator <<, but others prefer a separate print(...). What I've learned from reading more and more C++ books and articles (and I'm coming around to it) is that a free print(...) can be more interesting for keeping the algorithm generic, decoupled, and self-contained. When choosing between convenience and discipline, choose the latter.
@Ivan: The STL is based on 3 pillars: Containers, Iterators, and Algorithms. This way it can provide a generic way for an algorithm like sort to be applied to many types of containers through the iterator interface. Here's an excerpt from SGI:
"Iterators are central to generic programming because they are an interface between containers and algorithms: algorithms typically take iterators as arguments, so a container need only provide a way to access its elements using iterators. This makes it possible to write a generic algorithm that operates on many different kinds of containers, even containers as different as a vector and a doubly linked list. "
Yes. As I'm living in the future with VC11, VC10 already feels old to me.
Oh, this is unfair, Stephan; you are tempting us.
For future topics: I recently got a translated copy of Effective C++ and became fascinated with std::function and its friends (like bind) and how they interact with the containers. Expanding on std::function and bind looks like a nice topic, with examples of how to wrap old/legacy algorithms for the STL (one example I like is sorting a vector with wrapped C string functions) or using them in patterns like visitor or observer.
As always, nice video, thank you @STL
@Charles, keep it coming
(edited/added) I just remembered another topic that could be nice (though I don't know if it fits under "advanced"): std::hash, and how to make our own types container/STL friendly.
Having read about the virtual ISA, I think I now better understand the part about it being minimal and portable. The restrict() keyword can be used to target the ISA, but in this case I think it would be better to name the restriction something like sm_5 or dc_5 instead of "directx".
Unless the idea is to think ahead and add future restrictions like "opencl" and "cuda". But again, which version of the DirectX, OpenCL, or CUDA ISA?
@Speed8ump: I want to add my ¢0.02 here. The code generated by GPU compilers is near the metal, but it is still an intermediate language following a strict ISA that is translated on the fly. Take the AMD Radeon HD 5xxx and 6xxx as an example: some models have a SIMD width of 5, others a width of 4, and a variable number of cores. When you generate the compiled shader (or now GPGPU code) you target the ISA, and the driver applies the hardware-specific details.
HLSL/DirectCompute generates code for one type of ISA too; this guarantees the code runs on a variety of hardware that follows that interface, in this case shader model 5 (with backward compatibility to 4 and 3).
CUDA and OpenCL use PTX.
The interesting thing about this kind of ISA is that it can be translated not only to GPU assembly but to x86 or any other instruction set too.
Brilliant, surely the first step toward integrating C++ across multiple CPUs and GPUs.
@Charles: I'm looking further in this direction. DC, OCL, and CUDA currently all use a C-like language, and all three have shown a desire to move to something C++-like or to have a C++-friendly API. Now with Unified Memory Addressing in CUDA, I think the other two can also develop this feature and take the next step toward a fully massively parallel library/API. AMP being an open standard, no doubt it can be ported to other platform combinations and influence OCL/CUDA.
Again, great job. My fingers are itching to use it.
@PhrostByte: I don't see much difference between CPU and GPU with respect to SIMD, at least at the bare-metal level (1 execution/shader unit :: 1 CPU core; VLIW4/5 :: AVX register/instruction; GPGPU thread-grid dispatcher :: multicores and NUMA nodes). But the programming paradigm, yes, that is quite different, as it gives you direct control over what is placed in the cache levels (though that is something they want to make optional).
The Intel OpenCL SDK is cool, as they provide an offline visual LLVM compiler where you can inspect how the code is generated as x86 SIMD (SSE4 and AVX).
My faith in those GPU talks comes from Khronos, and others, being very interested in migrating from C-for-GPU to C++, but a C++ suitable for GPUs and CPUs at the same time. CUDA already provides Unified Memory Addressing (pass pointers directly, no more host_ptr and device_ptr); let's see the next version of OpenCL.
I think what you want is support for vectorization / a vectorization library. Yes, vectorization support is nice, but the implementations I see around (gcc, Intel) are still very close to C plus compiler extensions. I hope things start to get better after the C++-and-GPU talks at C&B and at the AMD Fusion Developer's Conference.
Actually, I love the intrinsic header files (and they are the only way to mix assembly-level code into 64-bit MSVC builds, which don't allow inline asm), but I'm biased when talking about them (I'm the type of crazy dev who likes to "brush bits" sometimes).
@Charles: Well, I gave WWS a test drive. It's nice and all, but with the SP1 trouble I rolled back the WSDK and haven't installed it again. Some more material about it would be nice; the examples that come along are good, but the documentation needs a bit more insight. More propaganda would also help WWS spread (like examples involving WWS and Windows Phone; surely that would be like honey for attracting people's attention).
(Of course, all implemented in a modular way, for the sake of a clean VS.next Standard edition, augmented only with the modules you buy/download from a VS Module Market (yeah, like an app market!))
From my side, I use C++ mostly for learning and teaching (myself included) the core language/STL, parallelism, and graphics (GL, DX, CL), so I stand by my old post on the Goodhew interview: a minimal Visual C++ IDE with extensions enabled (one step beyond Express), plus a better modularized/decoupled Windows SDK, so we no longer have (or at least mitigate) the crashes from juggling many versions of the WSDK (like the headache we had with the latest SP1 for VS+WSDK).
A simple UI library built on Direct2D would be awesome, but if you guys just improve the interoperability already present in the Ribbon, that would already be good.