On one hand, a composite integer cannot possess a factor greater than its square root. On the other hand, since the list we're looking through contains all possible prime numbers, we are guaranteed to find a factor or an exact match eventually, so do we need the
Throwing this over to somebody with a bigger brain than me...
MathematicalOrchid 16:41, 5 February 2007 (UTC)
A composite can indeed have factors greater than its square root, and indeed most do. What you mean is that a composite will definitely have at least one factor less than or equal to its square root. Why not use
LOL! That is indeed what I meant. It turns out my comment above is correct - the
MathematicalOrchid 10:17, 6 February 2007 (UTC)
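For what it's worth, the square-root bound being discussed can be seen directly in a small trial-division sketch (leastFactor and isPrime are illustrative names I've made up here, not code from the article):

```haskell
-- A sketch of the square-root bound: if n is composite, its smallest
-- factor d satisfies d*d <= n, so the search can safely stop there.
leastFactor :: Integer -> Integer
leastFactor n = head ([d | d <- takeWhile (\d -> d*d <= n) [2..]
                         , n `mod` d == 0] ++ [n])

isPrime :: Integer -> Bool
isPrime n = n > 1 && leastFactor n == n
```

If no divisor is found at or below the square root, the number is its own least factor, i.e. prime.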
The section Simple Prime Sieve II is not a sieve in the same sense that the first one is. It really implements a primality test as a filter.
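As I read it, such a filter-based formulation looks something like the following — my own reconstruction of the idea, not the article's exact "Simple Prime Sieve II" code:

```haskell
-- A primality test used as a filter: each odd candidate is trial-divided
-- by the primes already found, up to its square root. This is a sketch
-- of the general shape, not the article's exact code.
primes :: [Integer]
primes = 2 : filter isPrime [3,5..]
  where
    isPrime n = all (\p -> n `mod` p /= 0)
                    (takeWhile (\p -> p*p <= n) primes)
```

Nothing is ever "crossed off" a list here; each candidate is tested independently, which is why it is a primality-test filter rather than a sieve.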
A more "sieve-like" version of the simple sieve, which exploits the fact that we need not check divisibility by primes larger than the square root, would be
primes = sieve [2..]
  where sieve (p:xs) = p : sieve [x | x <- xs, (x < p*p) || (x `mod` p /= 0)]
However, this runs even slower than the original!
Kapil Hari Paranjape 06:51, 4 February 2009 (UTC)
I want to thank Leon P. Smith for showing me the idea of producing the spans of odds directly, for version IV. I had a combination of span and infinite odds list, as in span (< p*p) [3,5..] etc. That sped it up some 20% more, when GHC-compiled.
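The equivalence of the two formulations can be sketched like this (oddsBelowSquare is a name I've invented for illustration; it is not from version IV):

```haskell
-- Producing a span of odds directly from its bounds, instead of
-- splitting a shared infinite odds list with span (< p*p).
oddsBelowSquare :: Integer -> Integer -> [Integer]
oddsBelowSquare from p = [from, from+2 .. p*p - 2]

-- e.g. oddsBelowSquare 3 3 yields the same span as
-- fst (span (< 3*3) [3,5..]), without walking a shared list.
```

Generating each span by its arithmetic bounds avoids repeatedly scanning and deconstructing one shared odds list.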
The mark-and-comb version that I put under Simple Sieve of Eratosthenes seems to me very "faithful" to the original (IYKWIM). Strangely, it shows exactly the same asymptotic behavior when GHC-compiled (tested inside GHCi) as IV. Does this prove that the priority queue-based code is better than the original? :)
BTW, "unzip" is somehow screwed up inside the "haskell" block; I don't know how to fix that.
I've also added the postponed-filters version to the first sieve code, to show that the squares optimization matters and gives a huge efficiency advantage all by itself. The odds-only trick gives it a dozen or two percent improvement, but that's nothing compared to this massive 20x speedup!
Written in list-comprehension style, it's
primes = 2 : 3 : sieve (tail primes) [5,7..]
  where
    sieve (p:ps) xs = h ++ sieve ps [x | x <- t, x `rem` p /= 0]
      where (h, _:t) = span (< p*p) xs
Which, BTW, is faster than version IV itself when interpreted in GHCi. So what are we comparing here, code versions or Haskell implementations?
WillNess 10:46, 15 November 2009 (UTC)
I've added the code for Euler's sieve, which is just the postponed filters with a minimal modification: substituting (t `minus` prime_multiples) for (filter not_divisible t).
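Spelled out with an ordered-list minus (my own helper definition here, assuming increasing duplicate-free lists — not necessarily the article's exact code), the substitution gives something like:

```haskell
-- Set difference of two ordered, increasing lists (illustrative helper).
minus :: Ord a => [a] -> [a] -> [a]
minus (x:xs) (y:ys) = case compare x y of
  LT -> x : minus xs (y:ys)
  EQ ->     minus xs ys
  GT ->     minus (x:xs) ys
minus xs _ = xs

-- Euler's sieve: the multiples of p are taken over the numbers not yet
-- crossed out, so every composite is removed exactly once. A sketch.
primes :: [Integer]
primes = euler [2..]
  where euler (p:xs) = p : euler (xs `minus` map (p*) (p:xs))
```

Because both lists are increasing, minus can run lazily over the infinite streams.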
Now it is obvious that (...(((s - a) - b) - c) - ...) is the same as (s - (a + b + c + ...)), and this is the next code, the "merged multiples" Euler's sieve.
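Merging the multiples streams first, then subtracting once, can be sketched with an ordered union — again my own helper definitions, written in the style of Richard Bird's sieve rather than the article's exact merged-multiples code:

```haskell
-- Ordered-list helpers (illustrative definitions, assuming both
-- arguments are increasing lists).
minus :: Ord a => [a] -> [a] -> [a]
minus (x:xs) (y:ys) = case compare x y of
  LT -> x : minus xs (y:ys)
  EQ ->     minus xs ys
  GT ->     minus (x:xs) ys
minus xs _ = xs

union :: Ord a => [a] -> [a] -> [a]
union (x:xs) (y:ys) = case compare x y of
  LT -> x : union xs (y:ys)
  EQ -> x : union xs ys
  GT -> y : union (x:xs) ys
union xs ys = xs ++ ys

-- All composites merged into one increasing stream and subtracted
-- once: s - (a + b + c + ...). The foldr stays productive because
-- each step emits p*p before demanding more of the stream.
primes :: [Integer]
primes = 2 : ([3..] `minus` composites)
  where composites = foldr (\p r -> p*p : union [p*p+p, p*p+2*p ..] r)
                           [] primes
```

Starting each multiples stream at p*p is what lets the corecursive foldr over primes terminate productively.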
It is very reminiscent of Richard Bird's famous code (which also appears in Melissa O'Neill's JFP article), but its copyright status is unknown to me, so I couldn't reference it in the main article body. The code as written in the article has the wrong clause order in 'merge', and uses 'minus' instead of the more efficient 'gaps'.
WillNess 17:06, 5 December 2009 (UTC)