## Comments

## E2E: Brian Beckman and Erik Meijer - Co/Contravariance in Physics and Programming, 2 of 3

Fantastic, Curt! thank you.

## E2E: Brian Beckman and Erik Meijer - Co/Contravariance in Physics and Programming, 2 of 3

Erik and I have started cooking up the last batch on this topic. Will have something to video soon.

## E2E: Brian Beckman and Erik Meijer - Co/Contravariance in Physics and Programming, 2 of 3

Yes, Parmenio, there are plenty of people looking to model fundamental processes in physics as computations. Here's just one paper that I found in a few seconds of Binging: http://arxiv.org/abs/quant-ph/0312067

In general, arxiv.org is a fantastic place to watch for new stuff coming down the pike.

## E2E: Brian Beckman and Erik Meijer - Co/Contravariance in Physics and Programming, 2 of 3

Sorry you didn't like it, bigbag.

## E2E: Brian Beckman and Erik Meijer - Co/Contravariance in Physics and Programming, 2 of 3

Spivak is the classic -- very terse, very beautiful, very out-of-print. Hubbard & Hubbard is loaded with examples and remarks and exercises.

## E2E: Brian Beckman and Erik Meijer - Co/Contravariance in Physics and Programming, 2 of 3

yes yes, we need to finish up

## E2E: Brian Beckman and Erik Meijer - Co/Contravariance in Physics and Programming, 2 of 3

yes, ShiNoNoir -- the best, current source I know is "Vector Calculus, Linear Algebra, and Differential Forms" (VCLADF) by Hubbard & Hubbard, http://matrixeditions.com/UnifiedApproach4th.html .

## C9 Lectures: Dr. Erik Meijer - Functional Programming Fundamentals Chapter 13 of 13

Hi Erik -- you mentioned that the "sleight-of-hand" of induction is similar to the kind magicians use:

the fast-reverse (linear time) algorithm is exactly the one you would use to reverse a pack of playing cards by dealing them off the top, face-down, into another pile that starts off empty.

There are other famous algorithms that are isomorphic to manual procedures with playing cards, like quicksort, merge sort, binary search, etc. Working out such algorithms with a pack of cards in hand is not completely crazy.
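For the record, here's a minimal Haskell sketch of both versions of reverse (the names `naiveReverse` and `fastReverse` are mine, not from the lecture):

```haskell
-- Naive reverse: quadratic, because (++) re-walks the reversed prefix
-- at every step.
naiveReverse :: [a] -> [a]
naiveReverse []     = []
naiveReverse (x:xs) = naiveReverse xs ++ [x]

-- Fast reverse: linear. The accumulator 'acc' plays the role of the
-- face-down discard pile; each card dealt off the top of the deck
-- lands on top of the pile, reversing the order.
fastReverse :: [a] -> [a]
fastReverse = go []
  where
    go acc []     = acc
    go acc (x:xs) = go (x:acc) xs
```

The inductive "sleight-of-hand" lives in the invariant of `go`: at every step, `go acc xs` equals `reverse xs ++ acc`.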

## C9 Lectures: Dr. Brian Beckman - Covariance and Contravariance in Physics 1 of 1

Thanks for the link, Justin. I haven't seen that paper and will take a look!

## C9 Lectures: Dr. Brian Beckman - Covariance and Contravariance in Physics 1 of 1

Hi, Justin -- think of these as formulas-to-evaluate or equations-to-solve rather than as definitions-qua-rewrite rules. If you happen to have the values of the coordinates of some point in rectangular coordinates, then use the rectangular-to-polar conversion equations to get the values of the coordinates of the same point in the polar coordinate system, and vice versa. Ditto the Jacobian equations. Operationally, use ONE of the pair as a definition and then think of the other of the pair as the inversion of the definition, that is, the solution of an equation. Since these coordinate transformations are one-to-one and onto functions -- that is, bijections, that is, invertible (except at singularities) -- you can invert these equations almost everywhere on a manifold, so it doesn't matter which of the pair you pick as the "definition". Hope that helps!
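A concrete Haskell sketch of such a pair, taking one direction as the "definition" and the other as its inversion (the function names are mine, for illustration):

```haskell
-- Polar -> rectangular, taken here as the "definition".
fromPolar :: (Double, Double) -> (Double, Double)
fromPolar (r, theta) = (r * cos theta, r * sin theta)

-- Rectangular -> polar, the inversion: solve x = r cos t, y = r sin t
-- for r and theta. Singular at the origin, where theta is undefined.
toPolar :: (Double, Double) -> (Double, Double)
toPolar (x, y) = (sqrt (x * x + y * y), atan2 y x)
```

Away from the singularity, `fromPolar . toPolar` is the identity (up to floating-point error), which is exactly what "invertible almost everywhere" buys you.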

## C9 Lectures: Dr. Brian Beckman - Covariance and Contravariance in Physics 1 of 1

MathType, from dessci dot com, will, I believe, let you install the proper fonts for free. Turns out the boxes are just equals signs with little deltas on top, meaning "defined as," meaning it's not only an equation but a definition of the symbol on the left-hand side.

## C9 Conversations: Brian Beckman on Complexity

Yes, exo -- composition implies dependency order, whether it's regular composition a la f(g(x)) or Kleisli composition in a monad a la \a -> (f a >>= \b -> g b). Since the dependencies are explicit in the program text, they can be decomposed by program-to-program transformations (aka "rewrites").
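To make the parallel concrete, here's a hedged Haskell sketch in the Maybe monad (the functions `half` and `decr` are invented for illustration):

```haskell
import Control.Monad ((>=>))

half :: Int -> Maybe Int
half n = if even n then Just (n `div` 2) else Nothing

decr :: Int -> Maybe Int
decr n = if n > 0 then Just (n - 1) else Nothing

-- Kleisli composition fixes the dependency order just as f(g(x)) does:
-- 'half' must run before 'decr', and the Maybe effect is threaded through.
pipeline :: Int -> Maybe Int
pipeline = half >=> decr   -- same as \a -> half a >>= \b -> decr b
```

Because the order is explicit in the program text, a rewrite engine can see that `half` must run first, exactly as with f(g(x)).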

The "ambient monad" just means that side-effecting ordinary-looking compositions act exactly as if they were monadic Kleisli compositions in a State monad comprising the entire address-space of the machine.
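As a toy model of that idea, here's a Haskell sketch where the "address space" shrinks to a single Int cell and every ordinary-looking statement is really a Kleisli arrow in a hand-rolled state monad (all the names here are mine, for illustration):

```haskell
-- The machine's entire "address space", shrunk to one cell.
type Mem = Int

-- A side-effecting "statement": takes memory, returns a result plus
-- the new memory.
type Action a = Mem -> (a, Mem)

-- Kleisli-style bind: run the first action, feed its result and the
-- updated memory to the second. Sequencing IS composition in this monad.
bindA :: Action a -> (a -> Action b) -> Action b
bindA m k = \s -> let (a, s') = m s in k a s'

incr :: Action ()        -- looks like an imperative statement
incr s = ((), s + 1)

readMem :: Action Mem
readMem s = (s, s)

-- "incr; incr; return the cell" -- ordinary-looking, monadic underneath.
program :: Action Mem
program = incr `bindA` \_ -> incr `bindA` \_ -> readMem
```

Running `program` against an initial memory of 0 yields the result 2 and the final memory 2; the semicolons of imperative code are just sugared `bindA`s.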

You are also right that threads break down the dependency abstraction and are therefore "outside" the ambient monad. Suppose one thread computes function f, and another thread computes function g. With explicit compositions, monadic or not, you HAVE to write f(g(x)) or g(f(x)). If you just let threads go, you have no way of knowing whether f(g(x)) gets computed or g(f(x)) until after the fact. In pi-calculus, it's as if you wrote f(g(x)) | g(f(x)), that is, "hey, pick one of the two, I don't care which." It's not a monadic expression, although with pi we can get a precise semantics for it by embracing the non-determinism.

The plain truth is that the compositional style gives all kinds of practical engineering advantages like reusability, modularity, and maintainability, reducing cost over time. The free-wheeling style gives us all kinds of performance advantages, at the cost of compositionality. Both styles have their places. I am very fond of the idea of high compositionality at the highest levels of abstraction, where we need flexibility, and low compositionality inside the VM, where we need to squeeze value out of every cycle.

There are various ways of mapping explicit compositional dependency to time. One is lazy evaluation. Another is the "managed reference" machinery in Clojure. Another is the dual pair of IEnum* and IObserv*. They're all alive and healthy and loaded with possibilities.
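For the lazy-evaluation case, a one-liner Haskell example: the dependency structure is written down once, and demand decides when each element actually gets computed (this stream definition is the standard textbook one, not something from this thread):

```haskell
-- An infinite, self-referential stream of Fibonacci numbers. Nothing is
-- computed until a consumer demands it; 'take' is what maps the explicit
-- dependency structure onto actual time.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

firstEight :: [Integer]
firstEight = take 8 fibs
```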