1) I'd like to state exactly WHY functional programming is far easier to parallelize. It's not because of immutability per se; immutability helps, but the real reason is slightly more subtle. C++ and other compiled languages are lowered into linear assembly instructions, and that linear form is the actual problem: apart from branches/jumps, each instruction depends on the instructions above it to establish the current execution path. Functional languages, on the other hand, use LAMBDAS, which are self-contained packages of instructions independent of other lambdas. Because one lambda/anonymous function relies on nothing else in the program except possible data dependencies, you can run it independently of its predecessor instructions or of any particular processor. A lambda's instructions are therefore at a coarse enough grain that they aren't chained together by a built-in per-CPU hardware program counter, the way both the structure and the logic of an imperative program are.
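A minimal sketch of the point above, in Python: when a function is pure, each call depends only on its arguments (a data dependency), so the calls can be scheduled on different workers in any order and still produce the same result as running them one after another. The names `square` and `inputs` are just illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    # Pure: no shared state, no reliance on any "predecessor" call.
    return x * x

inputs = [1, 2, 3, 4, 5]

# Sequential execution, one call after another.
sequential = [square(x) for x in inputs]

# The same calls farmed out to a pool of workers; ordering between
# calls doesn't matter because each is self-contained.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(square, inputs))

print(parallel == sequential)  # True
```

The interesting part is not the speedup (a toy like this has none) but that the parallel and sequential results are guaranteed identical, which is exactly the independence property being described.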
2) The other thing I've never been able to understand is WHY we are still limited to a single main function!!! To me, if code entry and exit points could be referenced or indexed like a function table at the beginning of the file, after the magic number and before the header metadata, then the binary could/should be able to run different functions on different processing cores at the same time, using hardware-supported memory spaces per execution path for sharing memory, instead of one large traditional main function that must start at a single point in the code/binary and end at specified exit/exception points.
This is an alternate approach I've been turning over in my head, as a different way of thinking from the "thread" model of software engineering.
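Here's a hypothetical sketch of that "entry table" idea, approximated in Python: instead of one main, a table near the top of the file lists independently runnable entry points, and a loader starts each one on its own thread with a shared memory space. Every name here (`entry_compute`, `entry_log`, `ENTRY_TABLE`) is invented for illustration; a real version would live in the binary format and loader, not in user code.

```python
import threading

def entry_compute(shared):
    # One entry point: does some work and writes its result.
    shared["compute"] = sum(range(100))

def entry_log(shared):
    # A second, independent entry point.
    shared["log"] = "started"

# The "entry table": analogous to an index of entry points stored
# after the magic number, which the loader could read and dispatch.
ENTRY_TABLE = [entry_compute, entry_log]

shared = {}  # stand-in for a per-execution-path shared memory space
threads = [threading.Thread(target=fn, args=(shared,)) for fn in ENTRY_TABLE]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(shared["compute"], shared["log"])  # 4950 started
```

Of course this still uses threads under the hood, since that's what today's operating systems expose; the sketch only shows the dispatch shape, where the "table" rather than a single main decides what runs concurrently.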