Haskell programs sometimes consume far more memory than necessary, and this is often due to too much, or too little, laziness.
One common culprit is Haskell's [Char]-based I/O. For example, if you read a file lazily and store information in a lazy data structure, that structure will often retain unevaluated thunks that point into the list of characters. This can keep the whole file in memory as a very expensive linked list of characters.
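A minimal sketch of the retention problem (the file name is hypothetical): the thunk for 'n' holds a reference to the start of 'contents', so the whole file stays live as a [Char] until 'n' is finally demanded.

```haskell
main :: IO ()
main = do
    contents <- readFile "input.txt"   -- lazy, Char-by-Char I/O
    let n = length (lines contents)    -- a thunk, not evaluated yet
    -- ... imagine a long-running phase here: all the while, the
    -- unevaluated thunk for 'n' pins the entire character list ...
    print n                            -- evaluation happens only now
```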
One solution is to use Data.ByteString to reduce the cost per character. This is often effective, but may not be feasible or straightforward if your code depends on third-party libraries, such as XML parsers or Parsec.
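For comparison, here is the same line count using strict ByteString I/O (the file name is again hypothetical). The file is held as one compact byte buffer rather than a linked list of boxed Chars:

```haskell
import qualified Data.ByteString.Char8 as B

main :: IO ()
main = do
    contents <- B.readFile "input.txt"  -- strict, buffer-based I/O
    print (length (B.lines contents))
```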
A better solution is to consider whether you can avoid the problem through more careful evaluation.
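One way to sketch such a fix, assuming the summary value is cheap to compute up front: force it with 'seq' as soon as it is available, so the character list can be garbage-collected before the rest of the program runs.

```haskell
main :: IO ()
main = do
    contents <- readFile "input.txt"   -- hypothetical input file
    let n = length (lines contents)
    n `seq` return ()                  -- evaluate 'n' right away
    -- ... long-running work here no longer retains 'contents' ...
    print n
```

Bang patterns or Control.DeepSeq serve the same purpose when the value to be forced is a larger structure rather than a single number.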
Heap profiling is an important tool for combating excessive memory usage, and is available (at least) in GHC and NHC.
For GHC, compile your program for profiling using the '-prof -auto-all' flags. Then run your program with '+RTS -h' added to the command line. This produces a file named after your program, but with an additional '.hp' suffix.
Use the 'hp2ps' program to turn this into a graph of the heap in PostScript format, suitable for printing or viewing with e.g. 'gv'. The graph should help you pinpoint the culprit.
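The whole workflow might look like this (program and file names are hypothetical; the flags are those described above):

```shell
ghc -prof -auto-all -o myprog Main.hs   # compile with profiling support
./myprog +RTS -h                        # run; writes heap data to myprog.hp
hp2ps -c myprog.hp                      # render myprog.ps (colour graph)
gv myprog.ps                            # view the PostScript output
```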
See e.g. the GHC User's Guide, Chapter 6 (Profiling), for further options.