So I was working on a way to lazily generate primes, and I came up with these three definitions, which all work in essentially the same way: checking whether each new integer has a factor among all the preceding primes:
primes1 :: [Integer]
-- f accumulates, by composition, a predicate transformer: f (const True)
-- tests a candidate for indivisibility by every prime found so far.
primes1 = mkPrimes id [2..]
  where mkPrimes f (x:xs) =
          if f (const True) x
          then
            let g h y = y `mod` x > 0 && h y in
            x : mkPrimes (f . g) xs
          else
            mkPrimes f xs
primes2 :: [Integer]
-- Same idea, but the applied predicate f_ = f (const True) is built once
-- per new prime and passed along, rather than rebuilt for every candidate.
primes2 = mkPrimes id (const True) [2..]
  where mkPrimes f f_ (x:xs) =
          if f_ x
          then
            let g h y = y `mod` x > 0 && h y in
            x : mkPrimes (f . g) (f $ g $ const True) xs
          else
            mkPrimes f f_ xs
primes3 :: [Integer]
-- Same divisibility tests, but carrying the primes found so far
-- in a plain list ps instead of a composed function.
primes3 = mkPrimes [] [2..]
  where mkPrimes ps (x:xs) =
          if all (\p -> x `mod` p > 0) ps
          then
            x : mkPrimes (ps ++ [x]) xs
          else
            mkPrimes ps xs
So it seems to me primes2 should be a little faster than primes1, since it avoids rebuilding f_ = f (const True) for every candidate integer (which I think costs work proportional to the number of primes found so far), and only updates it when we encounter a new prime.
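To make that concrete, here is my hand-unrolling of primes1's test once the primes 2, 3 and 5 have been found (unrolled is just an illustrative name, writing g2, g3, g5 for the g captured at each prime):

-- By this point f is id . g2 . g3 . g5, so every candidate is tested
-- by rebuilding and then applying this nested predicate:
unrolled :: Integer -> Bool
unrolled y = y `mod` 2 > 0 && (y `mod` 3 > 0 && (y `mod` 5 > 0 && True))

primes2 constructs that chain only when a new prime appears and reuses it as f_; applying either chain to a candidate still performs one mod per known prime, the same tests primes3 performs with all.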
Just from unscientific tests (running take 1000 in GHCi), it seems like primes3 runs faster than primes2.
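For reference, here is a rough harness I would use to time this more carefully (timeIt and main are names I made up; it assumes the three definitions above are in scope and is meant to be compiled with ghc -O2, since GHCi timings are not very reliable):

import Control.Exception (evaluate)
import System.CPUTime (getCPUTime)

-- Force the first n primes of a list and report elapsed CPU time.
-- sum forces every element, not just the list spine.
timeIt :: String -> [Integer] -> Int -> IO ()
timeIt label ps n = do
  start <- getCPUTime
  _ <- evaluate (sum (take n ps))
  end <- getCPUTime
  -- getCPUTime is in picoseconds, hence the 1e12.
  putStrLn (label ++ ": " ++ show (fromIntegral (end - start) / 1e12 :: Double) ++ "s")

main :: IO ()
main = do
  timeIt "primes1" primes1 1000
  timeIt "primes2" primes2 1000
  timeIt "primes3" primes3 1000

One caveat: each list is a top-level CAF, so it is computed at most once per program run; timing the same list twice (or re-running take in the same GHCi session) just measures the cached result.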
Should I take a lesson from this and assume that whenever I can represent a function as an operation on a list, I should implement it that way for efficiency, or is there something else going on here?