Intro
I guess the Y-Combinator (maybe together with the Monad) is now on the Geek-Should-Know concept list. Y Combinator is also the name of a venture capital firm, so even managers may have heard of it!
I intend this to be a very simple intro to Y-Combinator using Groovy Closures. There are other blogs about Y-Combinator in Groovy:
My goal is to explain/derive the Y-combinator, not just to show how to use it.
An excellent post with code examples in Scheme can be found here:
http://mvanier.livejournal.com/2897.html
Combinators and Things: The History
"Computer Scientists do not know their history" Eric Meijer
My background (education) is mathematics, not CS. As a mathematician, I was never much into logic, so I am learning this stuff now. I believe a typical CS education (maybe this is changing) does not include topics like Combinatory Logic, Lambda Calculus, or Category/Functor Theory.
Universities produce an army of excellent engineers who think only in imperative terms.
(Correct me if I am wrong; I am sure this is very much school dependent.)
SKI Calculus and combinators (1920s, Moses Schönfinkel, Haskell Curry) are older than Turing Machines (1930s). Lambda Calculus (1930s, Alonzo Church) is contemporaneous with Turing's work. Lots of this is very cool stuff and includes the work of many other mathematicians, most notably David Hilbert.
The motivation behind SKI calculus was logic, not computing, but Church's work was largely motivated by the idea of a computer (same as Turing's work was). You can think of lambda calculus as `software oriented` thinking, while Turing machines have been `hardware oriented`.
That last statement is somewhat questionable: since the two base combinators S and K (I can be derived) can express all of mathematical logic, it may be possible to build a CPU with S, K, I as the instruction set. That idea has been worked on a lot (however, it is hard to google for it: "SKI hardware", "SKI instruction set" ;).
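To make the S, K, I talk concrete, here is a minimal sketch of the three combinators as curried Groovy closures (the derivation I = S K K is the standard one; the Groovy rendering is mine, for illustration):

def K = { x -> { y -> x } }                    // K x y = x
def S = { x -> { y -> { z -> x(z)(y(z)) } } }  // S x y z = x z (y z)
def I = S(K)(K)                                // I x = S K K x = K x (K x) = x
assert I(42) == 42
assert K('a')('b') == 'a'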
There is some learning involved. Here are some 'quick' guides I have recently found:
Enough of this intro. (I am not sure it even belongs in this post.)
Recursion as Fix-point
Functions are often defined using recursion. In this post, I will use the Fibonacci sequence with its (super-slow) recursive definition, which may look like this in Groovy:
Closure fib
fib = {int n -> n < 2 ? n: fib(n-1) + fib(n-2)}
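A quick sanity check (these values are just the start of the Fibonacci sequence, matching the asserts used later in this post):

assert fib(6) == 8
assert (0..6).collect(fib) == [0, 1, 1, 2, 3, 5, 8]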
but everything discussed here will generalize to other recursive function definitions.
Here is a different (and interesting) way to look at this. We can create a functional FIB (a function that takes a function as argument and returns a new function) like so:
Closure FIB = {Closure f ->
{int n -> n<2 ? n: f(n-1) + f(n-2)} //returns new closure
}
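Or equivalently, using the Fpiglet library's FpigBase.f, which (in this usage) turns a two-argument closure into a curried function (assuming Fpiglet is available on the classpath):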
Closure FIB = FpigBase.f {Closure f, int n ->
n<2 ? n: f(n-1) + f(n-2)
}
Our FIB functional maps functions into functions. A fix-point (if it exists, and here it does) is a function f0 such that:
FIB(f0) == f0
It should be obvious that this fix-point f0 must be the same as the fib function defined above, and this type of argument easily generalizes to other recursive definitions.
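A quick way to convince yourself (a spot check on a few values, not a proof), using the named fib from before:

// FIB(fib) should be indistinguishable from fib itself
assert (0..6).collect(FIB(fib)) == (0..6).collect(fib)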
So the name of the game here is: find a way to 'solve' functionals like the FIB functional defined above for their fix-points.
Fix function:
Would it be cool if we had a 'magic' function called fix which could calculate f0 like so:
f0 = fix FIB
Obviously, Haskell has one ;) and it is defined like this:
fix :: (a -> a) -> a
fix f = f (fix f)
This is exactly what we need: if you blindly plug in 'FIB' for 'f' and 'f0' for 'fix FIB', the function implementation line becomes:
f0 = FIB f0
so if fix returns anything, it will be a fix-point!
fib' :: (Integer -> Integer) -> Integer -> Integer
fib' f 0 = 0
fib' f 1 = 1
fib' f n = f (n-1) + f (n-2)

f0 :: Integer -> Integer
f0 = fix fib'

numbs = map f0 [0..6] -- numbs equals [0,1,1,2,3,5,8]
Let us do that in Groovy! There is a little glitch: Haskell is a lazy language, so the above implementation of fix does not have to stack overflow. This simple Groovy equivalent will always stack overflow:
def fix
fix = {Closure f -> {x -> (f << fix(f))(x)} } // (f << fix(f))(x) must evaluate fix(f)(x) completely before f runs, so every call recurses forever
but not if you do it like this:
def fix
fix = {Closure f -> f({x -> (fix(f))(x)}) } // f receives a deferred hook; fix(f) is only evaluated if the hook is actually called
you can run the following to convince yourself that it works:
def f0 = fix(FIB)
assert [0,1,1,2,3,5,8] == (0..6).collect(f0)
assert 8 == f0(6)
The trick is that the fix function passes the closure {x -> (fix(f))(x)} to the next iteration of FIB, and that closure does not need to be invoked if n < 2.
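One way to watch the deferral at work (a sketch; the counter and the fixTraced name are made up for illustration):

def calls = 0
def fixTraced
fixTraced = {Closure f -> f({x -> calls++; (fixTraced(f))(x)}) } // count each time the hook actually fires
def traced = fixTraced(FIB)
assert traced(1) == 1  // n < 2: the deferred hook is never invoked
assert calls == 0
assert traced(2) == 1  // n >= 2: the hook fires for f(1) and f(0)
assert calls == 2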
This is cool, but the definition of `fix` itself is recursive. So we are swapping one recursion for another. It would be better if we could do recursion without ANY recursion!
Y-Combinator
The Y combinator is, well, a combinator (whatever that means) invented by Haskell Curry.
It is typically represented by the following lambda expression (whatever that means):
λf.(λx.f (x x)) (λx.f (x x))
Instead of explaining its meaning, let me try to derive it (using Groovy), and the meaning may become clear. Forget about the fix function. I would really like to do something like this:
FIB(FIB)(6) //or FIB(FIB, 6)
This will not work with the current definition of FIB, but it will work if you define it like so:
Closure FIB = {Closure f, int n ->
    n < 2 ? n : f(f, n-1) + f(f, n-2)
}
FIB(FIB, 6) //returns 8
or using curried versions:
Closure FIB = {Closure f ->
    {int n -> n < 2 ? n : f(f)(n-1) + f(f)(n-2) }
}
assert 8 == FIB(FIB)(6)
This is it; this is the main trick behind the Y-Combinator. The rest is simply a bunch of refactoring steps.
This trick may look like cheating, and to some extent it is. Instead of being able to call a function from itself (which would be recursion), we pass a copy of that function to itself so that the function can call it. On one hand this seems like recursion in disguise; on the other hand it is function composition. On some deep level, composing a function with itself and recursion are similar concepts.
I like to think that function composition is a stronger concept than recursion. Y verifies that claim.
I can refactor my code so it looks like this:
Closure c2 = {f -> return {x -> f(f)(x)} }
Closure FIB = {Closure f2 -> return {int n -> n < 2 ? n : f2(n-1) + f2(n-2)} }
c2({Closure f -> return FIB(c2(f)) })(6) //returns 8
and now I can re-write my code like so:
Closure c2 = {f -> {x -> f(f)(x)} }
Closure FIB = {Closure f -> {int n -> n < 2 ? n : f(n-1) + f(n-2) } }
def Y = {Closure fx -> c2({Closure f -> fx(c2(f)) })} //fx represents a functional, f a function
def f0 = Y(FIB)
assert [0,1,1,2,3,5,8] == (0..6).collect(f0)
and this is it: we have the Y Combinator in Groovy. And if you look long enough, you will see the original lambda expression!
Actually, what we got is this (Groovy is a strict/eagerly evaluated language):
λf.(λx.f (λv.((x x) v))) (λx.f (λv.((x x) v)))
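For the curious, here is that expression transliterated directly into Groovy (a sketch; the variable names mirror the lambda expression, and FIB is the one-argument functional from the last snippet):

// λf.(λx.f (λv.((x x) v))) (λx.f (λv.((x x) v)))
def Z = { f ->
    ({ x -> f({ v -> (x(x))(v) }) })({ x -> f({ v -> (x(x))(v) }) })
}
assert 8 == Z(FIB)(6)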
Formal Mathematical Proof
It is actually not very hard. This PDF: http://www.inf.fu-berlin.de/lehre/WS03/alpi/lambda.pdf
has a good introduction to lambda calculus, but the proof that Y acts as a fix-point finder requires more explanation:
Y g = (λf . (λx . f (x x)) (λx . f (x x))) g //1. definition
= (λx . g (x x)) (λx . g (x x)) //2. β-reduction: the λf abstraction applied to g
= g ((λx . g (x x)) (λx . g (x x))) //3. β-reduction: the left λx term applied to the right one
= g (Y g) //4. definition again
The only thing that needs explanation is how we get from 2 to 3: we apply the left (λx . g (x x)) term to the right one, substituting (λx . g (x x)) for x in the body g (x x).
Useless or useful?
If you visited Hamlet D'Arcy's post on the Y-combinator, you may have noticed that he classified the Y-combinator as cool but useless.
I think the main applicability of the Y-combinator is that it lets you keep your closures anonymous. You can define a recursive closure without giving it a name, like this:
Y {Closure f ->
{ ... }
}
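For example, here is a one-off factorial computed without ever naming the recursive closure (a sketch reusing the Y defined above; the factorial functional is mine, for illustration):

assert 120 == Y({Closure f ->
    {int n -> n <= 1 ? 1 : n * f(n-1)}
})(5)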
I think that could make some of your code more expressive. Also, if pointless (point-free) programming is important to you, then you should love Y! Now, pointless and useless are not exactly the same ;).
There is also the issue of recursion being logically dangerous. You can look up 'Curry's Paradox' and read up on that. But hopefully compilers will catch all of that for us, and I do not think it is related to Groovy programming anyway.
So is this just cool theoretical CS? My take is that, for working programmers, it may be more important to know what combinators are in general and to try to learn to 'think in combinators'. The Y-combinator probably will not help much with that. 'Thinking in combinators' involves higher-level combinators, like the function sets defining Monads, Arrows, Lenses, etc.