Talk:Questions and answers

From HaskellWiki

Latest revision as of 13:23, 5 February 2007

IRC

Is this a good place to ask questions? MathematicalOrchid 15:30, 18 January 2007 (UTC)

Not yet, at least. Try the #haskell IRC channel on freenode, which is usually manned by at least a few very helpful people. Alternatively, try the haskell-cafe mailing list --Johannes Ahlmann 12:29, 22 January 2007 (UTC)

A question of speed

I have just performed a benchmark regarding the speed of (++) vs putStr, and received an extremely puzzling and counter-intuitive result. Perhaps somebody can explain?

Test #1

  writeFile "Test1.txt" $ concat $ replicate n "test"

For n = 10,000,000, that takes about 35 seconds wall-clock time and about 17 seconds CPU time on my test machine. (It also uses about 1.4 MB RAM.)

Test #2

  writeFile "Test2.txt" $ concatMap (\_ -> "test") [1..n]

For the same value of n, that takes about 43 seconds wall-clock time and about 20 seconds CPU time. (Uses 2.4 MB RAM for some reason...)
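One plausible (unverified) guess at the extra megabyte: Test #2 materialises a throwaway [1..n] index list, a cons cell plus a boxed number per element, which Test #1 never allocates. The two expressions produce identical output, as this small sketch shows (test1 and test2 are names I've introduced here, not from the thread):

```haskell
-- Tests #1 and #2 build the same string; the only structural difference
-- is that test2 walks an index list it immediately throws away.
test1, test2 :: Int -> String
test1 n = concat (replicate n "test")
test2 n = concatMap (\_ -> "test") [1 .. n]
```

For example, `test1 3` and `test2 3` both yield `"testtesttest"`.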

Test #3

  writeFile "Test3.txt" $ build n ""

  build 0 x = x
  build n x = build (n-1) (x ++ "test")

This test does not finish. It simply consumes memory without limit, never writing anything to disk. (At 90 MB, Windoze warned me that 'the system is getting dangerously low on virtual memory'.)

I added a couple of calls to seq in there - but this made no noticeable difference to anything.
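A sketch of why Test #3 diverges, and one possible fix (mine, not from the thread): build left-nests (++) inside an accumulator, so each step re-traverses everything built so far, and no part of the string reaches weak head normal form until the recursion bottoms out; that is also why seq, which only forces to WHNF, changed nothing. Guarded recursion produces the string lazily instead:

```haskell
-- Since every chunk is the same string, the accumulator is unnecessary:
-- emit each "test" in front of a lazily-computed tail, so writeFile can
-- stream the result in constant space, as in Test #1.
build' :: Int -> String
build' 0 = ""
build' n = "test" ++ build' (n - 1)
```

With this, `writeFile "Test3.txt" (build' n)` should behave like Test #1.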

Test #4

And now the really interesting test:

  do h <- openFile "Test4.txt" WriteMode ; mapM_ (\_ -> hPutStr h "test") [1..n] ; hClose h

This takes 80 seconds wall-time and 70 seconds CPU time. (Memory usage appears to be 2.4 MB or less.)

Summary

I'm surprised that concatMap should be slower than concat and replicate. But then, we're not talking about a huge speed difference.

I am absolutely astonished that (++) should be 50% faster than the much more efficient I/O calls. Does anybody have the slightest clue how this can be? Currently the only thing I can think of is that each I/O call has some kind of constant overhead, so the number of I/O calls affects speed more than the amount of data processed. But even so... 50%??
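One way to probe the per-call-overhead hypothesis is to hold the total data fixed while shrinking the number of hPutStr calls: each call takes the handle lock and runs the buffering machinery, so fewer, larger writes should close the gap if the hypothesis holds. A sketch (writeChunked is my name; this was not part of the original benchmark):

```haskell
import System.IO

-- Write n copies of "test" using n `div` k calls of k chunks each.
-- If each hPutStr carries fixed overhead, larger k should approach
-- the speed of Test #1.
writeChunked :: Int -> Int -> FilePath -> IO ()
writeChunked n k path = do
  h <- openFile path WriteMode
  hSetBuffering h (BlockBuffering Nothing)  -- explicit block buffering
  let chunk = concat (replicate k "test")   -- one pre-built chunk
  mapM_ (\_ -> hPutStr h chunk) [1 .. n `div` k]
  hClose h
```

For instance, `writeChunked 10000000 1000 "Test5.txt"` writes the same 40 MB as Test #4 but with 10,000 calls instead of 10,000,000.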

All this is with code compiled by GHC 6.6 - as if that makes any difference.