You're right that delimited continuations can be expressed using undelimited continuations, so the definitions of shiftT and resetT can always be described using just ContT. But:
- Delimited continuations are less powerful, which makes them easier to implement and also easier to reason about. (See also Oleg Kiselyov's many interesting posts about continuations.)
- Using the standard shift/reset notation makes the code easier to understand, especially for readers already familiar with the concept.
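For reference, here is how shiftT and resetT boil down to plain ContT. If I read the transformers source correctly, the definitions are essentially the following (primed names here just to avoid clashing with the library's own exports):

```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Cont (ContT (..), evalContT)

-- A sketch of shiftT/resetT in terms of ContT alone (essentially the
-- transformers definitions, primed to avoid name clashes):
resetT' :: Monad m => ContT r m r -> ContT r' m r
resetT' = lift . evalContT

shiftT' :: Monad m => ((a -> m r) -> ContT r m r) -> ContT r m a
shiftT' f = ContT (evalContT . f)
```

resetT' runs the delimited block to completion and lifts its result, while shiftT' hands the current continuation (the "rest of the block") to f as an ordinary function.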
Essentially, continuations allow you to turn a program inside out: when shift calls the function passed to it, the block delimited by reset is squeezed into the inner part of the program. (With undelimited continuations the whole execution context is squeezed inside, which is what makes them so weird.)
Let's look at a few examples:
import Data.List
import Control.Monad
import Control.Monad.Trans
import Control.Monad.Trans.Cont
test0 :: Integer
test0 = evalCont . reset $ do
    return 0
If we have reset without shift, it's just a pure computation, nothing fancy. The above function simply returns 0.
Now let's use both of them:
test1 :: Integer
test1 = evalCont . reset $ do
    r <- shift $ \esc -> do
        let x = esc 2
            y = esc 3
        return $ x * y
    return $ 1 + r
This becomes more interesting.
The code between shift and reset is squeezed into the calls of esc; in this simple example it's just return $ 1 + r. When we invoke esc, this whole delimited computation runs and its result becomes the result of the esc call. We do this twice, so essentially we run everything between shift and reset twice. And the result of the whole computation is x * y, the result of the shift block: (1 + 2) * (1 + 3) = 12.
So in a sense, the shift block becomes the outer part of the computation and the block between reset and shift becomes the inner part of the computation.
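The same evaluation can be written out by hand. A hand-desugared sketch of test1 (the names k, x and y are my own, not part of the library):

```haskell
-- Hand-desugared sketch of test1: the continuation captured by shift
-- is the rest of the reset block, i.e. \r -> 1 + r.
test1Desugared :: Integer
test1Desugared =
  let k r = 1 + r  -- what esc stands for: "return $ 1 + r"
      x = k 2      -- esc 2 == 3
      y = k 3      -- esc 3 == 4
  in x * y         -- the shift block's result: 12
```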
So far so good. But it becomes even more daunting if we use shift twice, as in this code sample:
list2 :: [(Int, String)]
list2 = evalCont . reset $ do
    x <- shift $ \yieldx ->
        return $ concatMap yieldx [1, 2, 3]
    y <- shift $ \yieldy ->
        return $ concatMap yieldy ["a", "b", "c"]
    return [(x, y)]
And here is what it produces (hidden for those who want to try to figure it out as an exercise):
[(1,"a"),(1,"b"),(1,"c"),(2,"a"),(2,"b"),(2,"c"),(3,"a"),(3,"b"),(3,"c")]
Now what happens is that the program is turned inside out twice:
- First, everything outside the x <- shift ... block is bound to the yieldx call, including the next shift. And the result of the whole computation is the result of the x <- shift ... block.
- Second, when the second y <- shift ... is invoked inside yieldx, the rest of the computation is again bound to the yieldy call. And the result of this inner computation is the result of the y <- shift ... block.
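These two inversions can be spelled out by hand. A sketch of what list2 amounts to after both shifts have turned the program inside out (the where-bound names are mine, not library code):

```haskell
-- Manual expansion of list2: each shift turns the rest of the do block
-- into a function (yieldx, yieldy) that is mapped over the list.
list2Expanded :: [(Int, String)]
list2Expanded = concatMap yieldx [1, 2, 3]
  where
    yieldx x = concatMap (yieldy x) ["a", "b", "c"]  -- rest after "x <- shift"
    yieldy x y = [(x, y)]                            -- rest after "y <- shift"
```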
So in x <- shift we run the rest of the computation for each of the three arguments, and during each of those runs we do the same for the three elements of the second list. The result is then the Cartesian product of the two lists, as we essentially performed two nested loops.
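That Cartesian product is the same one a plain list comprehension would give, so one way to check the intuition:

```haskell
-- The same two nested loops written directly as a list comprehension:
list2Loops :: [(Int, String)]
list2Loops = [ (x, y) | x <- [1, 2, 3], y <- ["a", "b", "c"] ]
```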
The same thing applies to shiftT and resetT, just with added side effects. For example, if we want to debug what is actually happening, we can run the above code in the IO monad and print debugging statements:
list2' :: IO [(Int, String)]
list2' = evalContT . resetT $ do
    x <- shiftT $ \yield ->
        lift . liftM concat . mapM (\n -> print n >> yield n) $ [1, 2, 3]
    y <- shiftT $ \yield ->
        lift . liftM concat . mapM (\n -> print n >> yield n) $ ["a", "b", "c"]
    return [(x, y)]
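For comparison, the same manual expansion works in IO too. A sketch with my own names; running it should make the nesting visible in the print order, with each number printed once and the three strings printed inside that iteration:

```haskell
import Control.Monad (liftM)

-- Hand-expanded sketch of list2' (names are mine): each shiftT turns
-- the rest of the do block into a function that is mapped over the list.
list2Expanded' :: IO [(Int, String)]
list2Expanded' = liftM concat (mapM yieldx [1, 2, 3])
  where
    yieldx x = print x >> liftM concat (mapM (yieldy x) ["a", "b", "c"])
    yieldy x y = print y >> return [(x, y)]
```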