C9 Lectures: Dr. Erik Meijer - Functional Programming Fundamentals, Chapter 5 of 13
Description
In Chapter 5, Dr. Meijer introduces and digs into List Comprehensions. In mathematics, comprehension notation is used to construct new sets from old sets. In Haskell, you can create new lists from old lists using a similar
comprehension syntax:
[x^2 | x <- [1..5]]
The above notation represents the list [1,4,9,16,25] of all numbers x^2 such that x is an element of the list [1..5]. The x <- [1..5] syntax is known as a
generator, and list comprehensions can have multiple generators, which can have explicit dependencies on earlier generators. You will also learn about
guards, which restrict the values produced by earlier generators.
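As a minimal sketch of these two features together (the name pairsEvenSum is made up for illustration), here is a comprehension with a second generator that depends on the first, plus a guard:

```haskell
-- All pairs (x, y) drawn from [1..3] with y at least x (the second
-- generator depends on the first), keeping only pairs with an even
-- sum (the guard).
pairsEvenSum :: [(Int, Int)]
pairsEvenSum = [(x, y) | x <- [1..3], y <- [x..3], even (x + y)]
-- evaluates to [(1,1),(1,3),(2,2),(3,3)]
```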
You should watch these in sequence (or skip around depending on your current level of knowledge in this domain):
Chapter 1
Chapter 2
Chapter 3
Chapter 4
Chapter 5
Chapter 6
Chapter 7
Chapter 8
Chapter 9
Chapter 10
Chapter 11
Chapter 12
Chapter 13
Download
Right click or Alt+Enter to download this episode
 MP3 (29.2 MB)
 Low Quality MP4 (70.9 MB)
 Mid Quality WMV (121.1 MB)
 High Quality MP4 (216.2 MB)
 Mid Quality MP4 (333.5 MB)
 High Quality WMV (711.7 MB)
The Discussion

At last, Thursday is here.
Excellent presentation as always. The only thing is that you missed an excellent opportunity to implement "where" using a list comprehension.
where' :: [a] -> (Int -> a -> Bool) -> [a]
where' xs p = [x | (x,i) <- xs', p i x]
  where xs' = zip xs [0..]
Thanks again.
I'm looking forward to the next lecture.

I love IEnumerable<char> and recently used it for code generation from XML to a simple Pascal-like language that generates the same XML, although I had some issues with it at first, as if IntelliSense didn't recognize the fact that String : IEnumerable<char>.
I'm watching the ICFP videos every evening, although quite a few of them are so deeply theoretical that you cannot understand them without quite some CS grounding. Nevertheless, there are also quite a few that are more accessible and show you where the future is, where Haskell compiler development is heading, unusual uses of monads, etc. Very interesting indeed.
Especially the Oscar-winning mathematician from Industrial Light & Magic using monads for untangling knots, describing quantum computation, etc. That was extremely interesting!

Here's the link for the ICFP 2009 videos.
Thanks for the great lecture series Erik. Converting the Haskell examples into their .NET equivalents really helps drive home some of the concepts for me. I also found your comparison between functional programming and the GOF design patterns quite enlightening.
I am waiting to see if Brian Beckman will "pop" in for the Monads lecture.

You never know when the Wizard of Monad will appear....
C

Say monad thrice and the Beckman genie is bound by genie law to escape from his Klein bottle and do some monad tricks!
This is of course a lecture series, but how wonderful it would be if there were a few interruptions along the way. Of course, the Beckman genie is also busy.
I loved his input on the IO E2E video where he showed in the simplest terms what duality really means. The greatest minds find the simplest solutions. And make them efficient.

This is the first of the tutorials to really get into the "meat" of Haskell and I enjoyed it quite a bit (especially the fact that I was able to write the solutions with only a few minor missteps in syntax). I may become a Haskell convert: writing type signatures does make the code that follows much easier to construct.

I take it you meant Enumerable.Range, not Observable.Range?

I bet that was on purpose, to show off the new Observable extensions to .NET. Even if that was probably not especially suitable in this case, there's no reason why it wouldn't apply equally well to an IO as to an IE, or so it appears. Speaking of observations! (Prettified observations.) I am starting to see why looking at types is so satisfying: a pure structural/type-level examination of semantics is very fascinating. Looking at currying, lists and tuples in this light will be enlightening.

From http://www.haskell.org/onlinereport/list.html
17.4 unfoldr
The unfoldr function is a "dual" to foldr: while foldr reduces a list to a summary value, unfoldr builds a list from a seed value. For example:
iterate f == unfoldr (\x -> Just (x, f x))
In some cases, unfoldr can undo a foldr operation:
unfoldr f' (foldr f z xs) == xs
if the following holds:
f' (f x y) = Just (x,y)
f' z = Nothing 
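To see the quoted duality in action, here is a small runnable sketch (countdown is a made-up name) that builds a list from a seed with Data.List.unfoldr:

```haskell
import Data.List (unfoldr)

-- Build a descending list from a seed: Just (element, nextSeed)
-- keeps the unfold going, Nothing stops it.
countdown :: Int -> [Int]
countdown = unfoldr (\n -> if n <= 0 then Nothing else Just (n, n - 1))
-- countdown 5 evaluates to [5,4,3,2,1]
```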
Or other special guests

Talking about unfolds, someone from #haskell once wrote a beautiful implementation using a GHC extension here on hpaste.

concat :: [[a]] -> [a]
Generalized: join :: Monad m => m (m a) -> m a
{ if you sub m=[], you'll get: join :: [] ([] a) -> [] a = join :: [[a]] -> [a] :) }
Dual of join: duplicate :: Comonad w => w a -> w (w a)
But I'm not well versed in the area of comonads :p
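The concat/join correspondence can be checked directly, since Control.Monad.join specializes to concat at the list monad; a quick sketch:

```haskell
import Control.Monad (join)

-- At the list monad, join :: [[a]] -> [a] behaves exactly like concat.
flattened :: [Int]
flattened = join [[1,2],[3],[4,5]]
-- evaluates to [1,2,3,4,5], the same as concat [[1,2],[3],[4,5]]
```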

This is not meant to knock anyone.
I'm sorry, but this constant referring back to C# and LINQ might be a "sound strategy" from a Microsoft product perspective, but it is doing nothing for me in terms of understanding either FP or Haskell, the purported objective of these lectures.
I can see that people who already are experts in the area are very excited about this. Perhaps they were the intended audience, though I don't understand the need for preaching to the choir... but whatever. Congratulations to all of you. As the expert on being a "n00b", I can tell you these lectures are doing absolutely ZERO for me. FWIW.
I'm going back to the Univ. of Aachen lectures on Haskell programming because they are geared towards a less expert audience and don't concentrate so heavily on how Haskell compares to other technologies (C#, F#, Ruby, Pascal, or whatever).
For anyone else who may be (a noob and still) interested, the lectures are here: http://video.sinf.de/#FP.2005SSGiesl.%28COt%29.HD_Videoaufzeichnung
I hope I'll gain enough knowledge to apply Haskell to some of the stuff I need to do in real life, and maybe then I'll be able to understand this lecture series which, truth be told, should be titled "A Comparative study of Haskell and C# for .Net & Haskell Experts".

You seem to be a troll so it's hard to answer in a cooperative spirit. Moving on ...
I'd say these lectures are geared towards people who know C#, VB.NET, Java, etc. Mapping back to C# helps ground the concepts. Haskell is like alien technology. It can make things a lot clearer (and more relevant) to correlate it with something you already understand. That's kind of true when learning anything new, in fact...
In any case you can always buy the book at a discount and learn Haskell from the ground up. That's what it was written for.

That is excellent! It is like an "iterated function system", perfect for things like pseudorandom numbers, cellular automata, L-systems and stuff like that.
The (two) books arrived recently. One of them will be a Christmas gift. My little brother is going to learn to love FP, whether he likes it or not!

I totally disagree, since I have been able to easily go back to C# and try to write all the Haskell stuff demonstrated in C# in some way. I have learned a lot already from it and it's not only useful for Haskell, since I have already applied some of it to my C# programs.
EDIT: I tried to post the code but the formatting is screwy and I keep getting errors in IE when I try to fix it. Oh well.

I went over the top and created a fb group for this lecture series (and allowed myself to call it the Lambda Lecture Series; and will keep "The Wall" clean with just lecture references until the series is done).
Later on it would be great to hear your take on the Expression Problem, although you've already alluded to it. I was first made aware of it by looking at some example code in Scala. There also appears to be a Haskell solution to this. Well, let's wait till you have taught us more about monads.

Great post, again. Looking forward to the next one in the series!

Is time an effect and should typesystems track it?

Well, ask yourself this: is time always an effect? For example, does the result of sqrt(9) depend on the current time?
But when time does have an effect on the result, the type system should track it. For example, getCurrentTime in Haskell has type IO UTCTime.
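To make the contrast concrete, a minimal sketch (using getCurrentTime from the time package, which ships with GHC):

```haskell
import Data.Time.Clock (UTCTime, getCurrentTime)

-- Pure: the type promises no effects, so sqrt 9 denotes the same
-- value at every moment.
root :: Double
root = sqrt 9

-- Effectful: the IO in the type records that the result depends on
-- when the action is run.
now :: IO UTCTime
now = getCurrentTime
```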

If it takes infinite time, then it's a clear effect. If it takes t time, then you could say that the result is t time in the future, rather than tn time in the future; in a manner of speaking, it's living in a different world.
I'm thinking in terms of the environment: time is like state transitions, and every state transition is an effect in the world. Of course there's no direct relation between a pure function's definition and how many state transitions it might take to finish, given a multitude of compiler rewritings, hmm...
There is also the matter of dealing with complexity as a tool to know when to parallelize. If a function could be automatically complexity-analyzed, then for every function you would know how expensive it was. This would allow you to express in the type system a higher-order function accepting another function which must have a complexity lower than a certain bound: the compiler and type checker would guarantee this to be true. Also, you might define functions with time guarantees, which forcefully stop prematurely if they cannot complete within some time, or never "start" in the first place, as in the former example.
For DateTime.Now it is clear that the function is not pure: it takes nothing and returns something different every time. There it is clear that the result is time itself, and time never stops changing; it's predictably impure. Erik mentioned modelling time as an IObservable where you can sample it a couple of times; I'm not sure that's really truly pure, as the IO, when you subscribe to it, will return different results every time, which is intended of course, but that's kind of hard to get around.
The question is: for infinite time, you have bottom, i.e. non-termination. But how much time is bottom? Is it always sufficient to say infinite time, or wouldn't it make sense to define softer notions of time?

"Also you might define functions with timeguarantees, that forcefully stop prematurely if they cannot complete within some time;"
Hmm, that's not an easy task. You'd need to encode values as types or you need type constructors that also accept values instead of only types. For example, you'd love to write code like this then:
-- an Int to Int function that takes at most 5 time units
someComputation :: Int -> RealTime 5 Int
But now you might also want to do arithmetic at the type level:
-- an example, the type of a function that adds two real-time Int results:
add :: RealTime t1 Int -> RealTime t2 Int -> RealTime (t1+t2) Int
I think this is getting quite tricky real fast.
(There is a nice way however to pass values around at the type level, see this paper: Functional Pearl: Implicit Configurations —or, Type Classes Reflect the Values of Types)
"The question is: for infinite time, you have bottom, i.e. nontermination. But how much time is bottom? Is it always sufficient to say infinite time or wouldn't it make sense to define more soft times?"
Bottom is the result of anything that fails to produce an answer. I wouldn't overload bottom for computations that do not finish in time, but use the type system instead to tag your values:
-- pseudo-Haskell (no support for values at the type level)
data RealTime <Int> a
  = Result a  -- the result of a successful RealTime computation
  | Aborted   -- the result of a computation that took too long
Edit:
Just found this, might also be an interesting read: Pausable IO actions for better GUI responsiveness.

Interesting, ShinNoNoir. Actually, I've scribbled up a few loose thoughts about this, but in the next post I'll maybe write one of the combinators I've been thinking about: namely the computation preemption' combinator. This would spawn a thread (or maybe task) and synchronously wait for that task to complete until some time has passed, whereafter it will time out. This makes timeouts, or hard limits, a pervasive concept in functional programming, if you will. One may say the implementation will be very ugly, depending on OS and .NET primitives, but the abstract idea is quite cool.
There are many aspects to time and computation, and this is probably something where someone like Beckman could talk for hours.
I'll look into the pausable IO actions (looks good), but threads provide a brutal alternative: they are not sensitive to the granularity of a task, and hopefully the OS will ensure that a thread is interrupted after some time t.
As for ⊥: well, the idea is not to misuse bottom but merely to point out that there is a continuum (likely not infinite) between This Instant (!) and Not Ever (⊥). There is space here to be explored; I find that quite fascinating. People talk about the elephant in the room in relation to parallelism; well, time is everywhere and it's not mentioned in any type. Talk about a taboo topic!'
' This is probably the wrong name, as I now understand preemption to actually mean mature or natural completion of a task ahead of schedule, not forced preemption but I don't know the precise term to use for this concept

The Haskell page for ⊥ reveals something interesting!
Bottom is a member of any type, even the trivial type () or the equivalent simple type:
data Unary = Unary
If it were not, the compiler could solve the halting problem and statically determine whether any computation terminated (though note that some languages with dependent type systems, such as Epigram, can statically enforce termination, based on the type, for particular programs such as those using induction). The important part!
...note that some languages with dependent type systems, such as Epigram, can statically enforce termination, based on the type for particular programs, such as those using induction...
Let's look at Epigram
Epigram is the name of a functional programming language with dependent types and of the IDE usually packaged with it. Epigram's type system is strong enough to express program specifications. The goal is to support a smooth transition from ordinary programming to integrated programs and proofs whose correctness can be checked and certified by the compiler. Epigram exploits the propositions as types principle, and is based on intuitionistic type theory.  Wikipedia
Time to look into that too...
So, Epigram is dependently typed and has a 2-dimensional syntax! Another reason why code editors need to evolve.
It appears that the most popular up-in-the-sky proof assistants that are often mentioned on LtU don't exploit the same kind of dependent typing as seen in Epigram (1 or 2). Looks like it's quite novel... cue Erik on type systems of the future!

Here is my take on the homework:
1. Without looking at the definitions from the standard prelude, define the following library functions using recursion.
 Decide if all logical values in a list are True:
and' :: [Bool] -> Bool
and' [] = True
and' (False:_) = False
and' (_:xs) = and' xs
 Concatenate a list of lists:
concat' :: [[a]] -> [a]
concat' [] = []
concat' (xs:xss) = xs ++ concat' xss
 Produce a list with n identical elements:
replicate' :: Int -> a -> [a]
replicate' 0 _ = []
replicate' n x = x : replicate' (n-1) x
 Select the nth element of a list
(!!@) :: [a] -> Int -> a
(x:_)  !!@ 0 = x
(_:xs) !!@ n = xs !!@ (n-1)
 Decide if a value is an element of a list:
elem' :: Eq a => a -> [a] -> Bool
elem' _ [] = False
elem' x (y:ys)
  | x == y    = True
  | otherwise = elem' x ys
2. Define a recursive function merge :: Ord a => [a] -> [a] -> [a] that merges two sorted lists to give a single sorted list.
merge :: Ord a => [a] -> [a] -> [a]
merge [] [] = []
merge xs [] = xs
merge [] ys = ys
merge (x:xs) (y:ys)
  | x < y     = x : merge xs (y:ys)
  | x == y    = x : y : merge xs ys
  | otherwise = y : merge (x:xs) ys
3. Define a recursive function msort :: Ord a => [a] -> [a] that implements merge sort.
msort :: Ord a => [a] -> [a]
msort [] = []
msort [x] = [x]
msort xs = merge (msort ys) (msort zs)
  where [(ys, zs)] = halve xs

halve :: [a] -> [([a], [a])]
halve [] = []
halve xs = [(take n xs, drop n xs)]
  where n = length xs `div` 2

Did anyone else do the first homework exercise (pythagoreans) and notice the following pattern:
> pyths 5
[(3,4,5),(4,3,5)]
> pyths 55
[(33,44,55),(44,33,55)]
> pyths 555
[(171,528,555),(180,525,555),(312,459,555),(333,444,555),(444,333,555),...]
Freaky!

Why freaky? (k*3)^2 + (k*4)^2 = (k*5)^2.
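A quick sketch checking this scaling identity mechanically (the function names are made up):

```haskell
-- A Pythagorean triple stays a triple under scaling by k, since
-- (k*a)^2 + (k*b)^2 = k^2 * (a^2 + b^2) = (k*c)^2.
scales :: Int -> Bool
scales k = (k*3)^2 + (k*4)^2 == (k*5)^2

-- 5, 55 and 555 are 1, 11 and 111 times the hypotenuse 5.
allScale :: Bool
allScale = all scales [1, 11, 111]
```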

My (sloppy) attempt at homework:
pyths hyp = [ (c1, c2) | c1 <- [1..hyp-1], c2 <- [1..hyp-1], c1*c1 + c2*c2 == hyp*hyp ]

perfects n = [ x | x <- [1..n], x == sum (f x) ]
  where f n = [ x | x <- [1..n-1], n `mod` x == 0 ]

scalarProd xs ys = sum [ x*y | (x, ix) <- zip xs [1..], (y, iy) <- zip ys [1..], ix == iy ]

scalarProd_map xs ys = sum (map (\(x, y) -> x * y) (zip xs ys))

Hey, late as usual
module Main( main, Main.sum, Main.zip ) where

main = do
  print $ pyths 5
  print $ perfects 500
  print $ Main.zip [4,5,6,7] [1,2,3]
  print $ Prelude.zip [4,5,6,7] [1,2,3]
  print $ Main.zip [1,2,3] [4,5,6,7]
  print $ Prelude.zip [1,2,3] [4,5,6,7]

pyths :: Int -> [(Int, Int, Int)]
pyths = \n -> [(x, y, n) | x <- [1..n], y <- [1..n], (x*x + y*y) == n*n]

perfects :: Int -> [Int]
perfects n = [x | x <- [1..n], Main.sum (factors x) == x]

factors :: Int -> [Int]
factors = \n -> [x | x <- [1..n], x < n && n `mod` x == 0]

sum :: [Int] -> Int
sum [] = 0
sum (a:as) = a + Main.sum as

zip :: [Int] -> [Int] -> [(Int, Int)]
zip as bs = dropper (pairs as bs) (length bs)

pairs :: [Int] -> [Int] -> [(Int, Int)]
pairs as bs = [(x,y) | x <- as, y <- bs]

dropper :: [(Int,Int)] -> Int -> [(Int,Int)]
dropper [] _ = []
dropper (a:as) n = [a] ++ dropper (drop n as) n

Sohail Qayum Malik

factors :: Int -> [Int]
factors n = [x | x <- [1..n], n `mod` x == 0]

pyths :: Int -> [(Int,Int,Int)]
pyths n = [(x,y,z) | x <- [1..n], y <- [x..n], z <- [y..n], x^2 + y^2 == z^2]

perfects :: Int -> [Int]
perfects n = [x | x <- [1..n], sum (factors x) - x == x]

scal :: Num a => [a] -> [a] -> [a]
scal xs ys = zipWith (*) xs ys