Arrays


Haskell'98 supports just one array constructor type, namely Array, which gives you immutable boxed arrays. "Immutable" means that these arrays, like any other pure functional data structure, have contents fixed at construction time. You can't modify them, only query them. There are "modification" operations, but they just return new arrays and don't modify the original one. This makes it possible to use Arrays in pure functional code along with lists. "Boxed" means that array elements are just ordinary Haskell (lazy) values, which are evaluated on demand, and can even contain bottom (undefined) values. You can learn how to use these arrays at http://haskell.org/tutorial/arrays.html and I'd recommend that you read this before proceeding to the rest of this page.
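For instance, here is a minimal sketch of Haskell'98 array usage (the bounds and element values are invented for illustration):

 import Data.Array
 
 arr, arr2 :: Array Int Int
 arr  = listArray (1,5) [100,200,300,400,500]
 arr2 = arr // [(3,0)]              -- "update" returns a new array
 
 main = print (arr ! 3, arr2 ! 3)   -- prints (300,0): the original is unchanged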

Nowadays the main Haskell compilers, GHC and Hugs, ship with the same set of Hierarchical Libraries, and these libraries contain a new implementation of arrays which is backward compatible with the Haskell'98 one, but which has far more features. Suffice it to say that these libraries support 9 types of array constructors: Array, UArray, IOArray, IOUArray, STArray, STUArray, DiffArray, DiffUArray and StorableArray. Each provides just one of two interfaces, and one of these you already know.

Quick reference

           | Immutable              | IO monad                 | ST monad
           | (instance IArray a e)  | (instance MArray a e IO) | (instance MArray a e ST)
  Standard | Array, DiffArray       | IOArray                  | STArray
  Unboxed  | UArray, DiffUArray     | IOUArray, StorableArray  | STUArray

Immutable arrays (module Data.Array.IArray)

The first interface provided by the new array library is defined by the type class IArray (which stands for "immutable array" and is defined in the module Data.Array.IArray) and defines the same operations that were defined for Array in Haskell'98. The big difference is that it is now a type class and there are 4 array type constructors, each of which implements this interface: Array, UArray, DiffArray, and DiffUArray. We will later describe the differences between them and the cases when these other types are preferable to the good old Array. Also note that to use the Array type constructor together with the other new array types, you need to import the Data.Array.IArray module instead of Data.Array.
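Because IArray is a type class, you can also write functions that work with any immutable array type. A minimal sketch (the function name and values are invented for illustration):

 import Data.Array.IArray
 import Data.Array.Unboxed (UArray)
 
 -- this function works for any immutable array type: Array, UArray, DiffArray, ...
 sumElems :: (IArray a e, Num e) => a Int e -> e
 sumElems arr = sum (elems arr)
 
 boxed :: Array Int Int
 boxed = listArray (1,4) [1,2,3,4]
 
 unboxed :: UArray Int Int
 unboxed = listArray (1,4) [1,2,3,4]
 
 main = print (sumElems boxed, sumElems unboxed)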


Mutable IO arrays (module Data.Array.IO)

The second interface is defined by the type class MArray (which stands for "mutable array" and is defined in the module Data.Array.MArray) and contains operations to update array elements in-place. Mutable arrays are very similar to IORefs, only they contain multiple values. Type constructors for mutable arrays are IOArray and IOUArray and operations which create, update and query these arrays all belong to the IO monad:

 import Data.Array.IO
 main = do arr <- newArray (1,10) 37 :: IO (IOArray Int Int)
           a <- readArray arr 1
           writeArray arr 1 64
           b <- readArray arr 1 
           print (a,b)

This program creates an array of 10 elements with all values initially set to 37. Then it reads the first element of the array. After that, the program modifies the first element of the array and then reads it again. The type declaration in the second line is necessary because our little program doesn't provide enough context to allow the compiler to determine the concrete type of `arr`; real programs rarely need such declarations, because the concrete type can usually be inferred from the surrounding code.


Mutable arrays in ST monad (module Data.Array.ST)

In the same way that IORef has its more general cousin STRef, IOArray has a more general version STArray (and similarly, IOUArray corresponds to STUArray). These array types allow one to work with mutable arrays in the ST monad:

 import Control.Monad.ST
 import Data.Array.ST

 buildPair = do arr <- newArray (1,10) 37 :: ST s (STArray s Int Int)
                a <- readArray arr 1
                writeArray arr 1 64
                b <- readArray arr 1
                return (a,b)

 main = print $ runST buildPair

Believe it or not, now you know all that is needed to use any array type. Unless you are interested in speed issues, just use Array, IOArray and STArray where appropriate. The following topics are almost exclusively about selecting the proper array type to make programs run faster.


DiffArray (module Data.Array.Diff)

Note: as of Jan 2012, DiffArray is not yet ready for production use; its practical (wall clock) performance does not live up to its theoretical advantages.

As we already stated, the update operation on immutable arrays (IArray) just creates a new copy of the array, which is very inefficient, but it is a pure operation which can be used in pure functions. On the other hand, updates on mutable arrays (MArray) are efficient but can be done only in monadic code. In theory, DiffArray combines the best of both worlds - it supports the IArray interface and therefore can be used in a purely functional way, but internally it uses the efficient update of MArrays.

(In practice, however, DiffArrays are 10-100x slower than MArrays, due to the overhead of maintaining an immutable interface. See the bug report here: [1])

How does this trick work? DiffArray has a pure external interface, but internally it is represented as a reference to an IOArray.

When the '//' operator is applied to a diff array, its contents are physically updated in place. The old array silently changes its representation without changing the visible behavior: it stores a link to the new current array along with the difference to be applied to get the old contents.

So if a diff array is used in a single-threaded style, that is, after '//' application the old version is no longer used, a!i takes O(1) time and a//d takes O(length d). Accessing elements of older versions gradually becomes slower.

Updating an array which is not current makes a physical copy. The resulting array is unlinked from the old family. So you can obtain a version which is guaranteed to be current and thus has fast element access by a//[].

The library provides two "differential" array constructors - DiffArray, made internally from IOArray, and DiffUArray, based on IOUArray. If you really need to, you can construct new "differential" array types from any 'MArray' types living in the 'IO' monad. Since GHC 6.12, DiffArray has been split off into a separate package because it is unusably slow. See the Hackage documentation for further details.

Usage of DiffArray doesn't differ from that of Array; the only difference is memory consumption and speed:

import Data.Array.Diff

main = do
       let arr = listArray (1,1000) [1..1000] :: DiffArray Int Int
           a = arr ! 1
           arr2 = arr // [(1,37)]
           b = arr2 ! 1
       print (a,b)

You can use 'seq' to force evaluation of array elements prior to updating an array:

import Data.Array.Diff

main = do
       let arr = listArray (1,1000) [1..1000] :: DiffArray Int Int
           a = arr ! 1
           b = arr ! 2
           arr2 = a `seq` b `seq` (arr // [(1,37),(2,64)])
           c = arr2 ! 1
       print (a,b,c)

Unboxed arrays

In most implementations of lazy evaluation, values are represented at runtime as pointers to either their value, or code for computing their value. This extra level of indirection, together with any extra tags needed by the runtime, is known as a box. The default "boxed" arrays consist of many of these boxes, each of which may compute its value separately. This allows for many neat tricks, like recursively defining an array's elements in terms of one another, or only computing the specific elements of the array which are ever needed. However, for large arrays, it costs a lot in terms of overhead, and if the entire array is always needed, it can be a waste.
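For example, the laziness of boxed array elements lets you define an array recursively in terms of its own elements; here is a small sketch of a memoized Fibonacci table (the names are invented for illustration):

import Data.Array

-- each element is a lazy thunk that may refer to earlier elements of the same array
fibs :: Int -> Array Int Integer
fibs n = a
  where a = listArray (0,n) [f i | i <- [0..n]]
        f 0 = 0
        f 1 = 1
        f i = a ! (i-1) + a ! (i-2)

main :: IO ()
main = print (fibs 50 ! 50)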

Unboxed arrays are more like arrays in C - they contain just the plain values without this extra level of indirection, so that, for example, an array of 1024 values of type Int32 will use only 4 kb of memory. Moreover, indexing of such arrays can be significantly faster.

Of course, unboxed arrays have their own disadvantages. First, unboxed arrays can be made only of plain values having a fixed size - Int, Word, Char, Bool, Ptr, Double, etc. (see the full list in the Data.Array.Unboxed module). You can even implement unboxed arrays yourself for other simple types, including enumerations. But Integer, String and any other types defined with variable size cannot be elements of unboxed arrays. Second, without that extra level of indirection, all of the elements in an unboxed array must be evaluated when the array is evaluated, so you lose the benefits of lazy evaluation. Indexing the array to read just one element will construct the entire array. This is not much of a loss if you will eventually need the whole array, but it does prevent recursively defining the array elements in terms of each other, and may be too expensive if you only ever need specific values. Nevertheless, unboxed arrays are a very useful optimization instrument, and I recommend using them as much as possible.

All main array types in the library have unboxed counterparts:

Array - UArray          (module Data.Array.Unboxed)
IOArray - IOUArray      (module Data.Array.IO)
STArray - STUArray      (module Data.Array.ST)
DiffArray - DiffUArray  (module Data.Array.Diff)

So, basically replacing boxed arrays in your program with unboxed ones is very simple - just add 'U' to the type signatures, and you are done! Of course, if you change Array to UArray, you also need to add "Data.Array.Unboxed" to your imports list.
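For example, here is the earlier DiffArray example rewritten to use an unboxed immutable array instead (a minimal sketch):

import Data.Array.Unboxed

main = do
       let arr = listArray (1,1000) [1..1000] :: UArray Int Int
           a = arr ! 1
           arr2 = arr // [(1,37)]
           b = arr2 ! 1
       print (a,b)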


StorableArray (module Data.Array.Storable)

A storable array is an IO-mutable array which stores its contents in a contiguous memory block living in the C heap. Elements are stored according to the class 'Storable'. You can obtain the pointer to the array contents to manipulate elements from languages like C.

It is similar to 'IOUArray' (in particular, it implements the same MArray interface) but slower. The advantage is that it's compatible with C through the foreign function interface. The memory addresses of storable arrays are fixed, so you can pass them to C routines.

The pointer to the array contents is obtained by 'withStorableArray'. The idea is similar to 'ForeignPtr' (used internally here). The pointer should be used only during execution of the 'IO' action returned by the function passed as argument to 'withStorableArray'.

 {-# LANGUAGE ForeignFunctionInterface #-}
 import Data.Array.Storable
 import Foreign.Ptr
 import Foreign.C.Types
 import Foreign.Storable (sizeOf)
 
 main = do arr <- newArray (1,10) 37 :: IO (StorableArray Int Int)
           a <- readArray arr 1
           -- zero all 10 elements (10 * size of one Int in bytes)
           withStorableArray arr
               (\ptr -> memset ptr 0 (fromIntegral (10 * sizeOf (undefined :: Int))))
           b <- readArray arr 1
           print (a,b)
 
 foreign import ccall unsafe "string.h"
     memset  :: Ptr a -> CInt -> CSize -> IO ()

If you want to use this pointer afterwards, ensure that you call 'touchStorableArray' AFTER the last use of the pointer, so that the array will not be freed too early.
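A hedged sketch of that pattern (the helper name is invented for illustration; when possible, prefer keeping all uses of the pointer inside withStorableArray):

 import Data.Array.Storable
 import Foreign.Ptr
 
 -- hypothetical helper: hand the raw pointer to an IO action outside withStorableArray
 useRawPtr :: StorableArray Int Int -> (Ptr Int -> IO ()) -> IO ()
 useRawPtr arr act = do
     ptr <- withStorableArray arr return   -- extract the pointer itself
     act ptr                               -- ... last use of the pointer ...
     touchStorableArray arr                -- keeps the array alive until this point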


Additional comments: GHC 6.6 made access to 'StorableArray' as fast as to any other unboxed array. The only difference between 'StorableArray' and 'UArray' is that a UArray lies in the relocatable part of the GHC heap, while a 'StorableArray' lies in the non-relocatable part and therefore keeps a fixed address, which allows this address to be passed to C routines and stored in C data structures.

GHC 6.6 also adds an 'unsafeForeignPtrToStorableArray' operation that allows the use of any Ptr as the address of a 'StorableArray' and in particular works with arrays returned by C routines. Here is an example of using this operation:

import Data.Array.Storable
import Data.Array.Unsafe
import Foreign.Marshal.Alloc
import Foreign.Marshal.Array
import Foreign.ForeignPtr

main = do ptr <- mallocArray 10
          fptr <- newForeignPtr_ ptr
          arr <- unsafeForeignPtrToStorableArray (1,10) fptr :: IO (StorableArray Int Int)
          writeArray arr 1 64
          a <- readArray arr 1
          print a
          free ptr

This example allocates memory for 10 Ints (which emulates an array returned by some C function), then converts the returned 'Ptr Int' to a 'ForeignPtr Int', and the 'ForeignPtr Int' to a 'StorableArray Int Int'. It then writes and reads the first element of the array. At the end, the memory used by the array is deallocated by 'free', which again emulates deallocation by C routines. We can also enable automatic freeing of the allocated block by replacing "newForeignPtr_ ptr" with "newForeignPtr finalizerFree ptr". In this case the memory will be automatically freed after the last use of the array, as with any other Haskell object.


The Haskell Array Preprocessor (STPP)

Using mutable (IO and ST) arrays in Haskell is not very handy. But there is one tool which adds syntactic sugar to make the use of such arrays very close to that of imperative languages. It is written by Hal Daume III and you can get it at http://hal3.name/STPP/stpp.tar.gz

Using this tool, you can index array elements in arbitrarily complex expressions with the notation "arr[|i|]" and the preprocessor will automatically convert these forms to the appropriate calls to 'readArray' and 'writeArray'. Multi-dimensional arrays are also supported, with indexing in the form "arr[|i|][|j|]". See further descriptions at http://hal3.name/STPP/.
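Roughly speaking, the preprocessor turns each such indexing form into the corresponding readArray/writeArray calls. The following hand-written sketch only illustrates the kind of code the sugar stands for; it is not actual STPP output:

import Data.Array.MArray (MArray, Ix, readArray, writeArray)

-- "arr[|i|] <- arr[|i|] + 1" corresponds roughly to:
incrementAt :: (MArray a e m, Ix i, Num e) => a i e -> i -> m ()
incrementAt arr i = do
    x <- readArray arr i
    writeArray arr i (x + 1)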

Repa package

Another option for arrays in Haskell that is worth considering is REgular PArallel arrays (Repa). Repa is a Haskell library for high-performance, regular, multi-dimensional parallel arrays. It makes it easy to take advantage of multi-core CPUs. Repa also provides list-like operations on arrays such as map, fold and zipWith; moreover, Repa arrays are instances of Num, which comes in handy for many applications.
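A minimal sketch of what Repa code can look like, written against the Repa 3 API as the author remembers it (consult the Repa documentation for the authoritative interface):

import qualified Data.Array.Repa as R
import Data.Array.Repa (Z(..), (:.)(..))

main :: IO ()
main = do
  -- a one-dimensional unboxed array of ten Doubles
  let xs = R.fromListUnboxed (Z :. (10 :: Int)) [1..10] :: R.Array R.U R.DIM1 Double
  -- 'map' builds a delayed array; 'computeP' evaluates it in parallel
  ys <- R.computeP (R.map (* 2) xs) :: IO (R.Array R.U R.DIM1 Double)
  print (R.sumAllS ys)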

Repa employs a different syntax for arrays, which is also used in the experimental accelerate package. Data.Array.Accelerate aims to gain performance by running array computations on GPGPUs (via CUDA).

Repa possesses a number of other interesting features, such as exporting/importing arrays to and from ASCII or BMP files. For further information, consult the Repa tutorial.

ArrayRef library

The ArrayRef library reimplements array libraries with the following extensions:

  • dynamic (resizable) arrays
  • polymorphic unboxed arrays

It also adds syntactic sugar which simplifies array usage. Although not as elegant as STPP, it is implemented entirely inside the Haskell language, without requiring any preprocessor.


Unsafe indexing, freezing/thawing, running over array elements

There are operations that convert between mutable and immutable arrays of the same type, namely 'freeze' (mutable->immutable) and 'thaw' (immutable->mutable). They make a new copy of the array. If you are sure that a mutable array will not be modified, or that an immutable array will not be used after the conversion, you can use unsafeFreeze/unsafeThaw instead. These operations convert the array in place if the input and resulting arrays have the same memory representation (i.e. the same type and boxing). Please note that the "unsafe*" operations modify memory - they set/clear a flag in the array header which specifies array mutability. So these operations can't be used together with multi-threaded access to arrays (using threads or some form of coroutines).
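For example, a common pattern is to build an array with in-place updates in the ST monad and then freeze it into an immutable array (a minimal sketch; the names are invented for illustration):

import Control.Monad.ST
import Data.Array.ST
import Data.Array

buildFrozen :: Array Int Int
buildFrozen = runST $ do
    marr <- newArray (1,5) 0 :: ST s (STArray s Int Int)
    writeArray marr 3 42
    freeze marr        -- copies; unsafeFreeze would instead convert in place

main :: IO ()
main = print (buildFrozen ! 3)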

There are also operations that convert unboxed arrays to another element type, namely castIOUArray and castSTUArray. These operations rely on the actual type representation in memory and therefore there are no guarantees on their results. In particular, these operations can be used to convert any unboxable value to a sequence of bytes and vice versa. For example, they are used in the AltBinary library to serialize floating-point values. Please note that these operations don't recompute array bounds to reflect any changes in element size. You need to do that yourself using the 'sizeOf' operation.
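As a hedged sketch of that trick (assuming Double and Word64 have the same 8-byte size, and that castSTUArray is imported from Data.Array.Unsafe as in recent versions of the array package):

import Control.Monad.ST
import Data.Array.ST
import Data.Array.Unsafe (castSTUArray)
import Data.Word (Word64)

-- reinterpret the bits of a Double as a Word64
doubleToWord64 :: Double -> Word64
doubleToWord64 d = runST $ do
    arr  <- newArray (0,0) d :: ST s (STUArray s Int Double)
    warr <- castSTUArray arr      -- same bytes, viewed at a different element type
    readArray warr 0

main :: IO ()
main = print (doubleToWord64 1.0)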


While arrays can have any type of index, the internal representation only uses Ints for indexing. The array libraries first use the Ix class to translate the polymorphic index into an Int, and an internal indexing function is then called on this Int index. The internal functions are unsafeAt, unsafeRead and unsafeWrite, found in the Data.Array.Base module. You can use these operations yourself in order to speed up your program by avoiding bounds checking. These functions are marked "unsafe" for a good reason - they allow the programmer to access and overwrite arbitrary addresses in memory. These operations are especially useful if you need to walk through an entire array:

import Data.Array.Base (unsafeAt)
import Data.Array.IArray (IArray, Ix, bounds, rangeSize)

-- | Returns a list of all the elements of an array, in the same order
-- as their indices.
elems :: (IArray a e, Ix i) => a i e -> [e]
elems arr = [ unsafeAt arr i
                | i <- [0 .. rangeSize (bounds arr) - 1] ]

"unsafe*" operations in such loops are really safe because 'i' loops only through positions of existing array elements.


GHC-specific topics

Parallel arrays (module GHC.PArr)

As we already mentioned, the array library supports two array varieties - lazy boxed arrays and strict unboxed ones. A parallel array implements something intermediate: a strict boxed immutable array. This keeps the flexibility of using any data type as an array element while making both creation of and access to such arrays much faster. Array creation is implemented as one imperative loop that fills all the array elements, while accesses to array elements don't need to check the "box". It should be obvious that parallel arrays are not efficient in cases where the calculation of array elements is relatively complex and most elements will not be used. One more practical drawback is that parallel arrays don't support the IArray interface, which means that you can't write generic algorithms which work with both Array and the parallel array constructor.

Like many GHC extensions, this is described in a paper: An Approach to Fast Arrays in Haskell, by Manuel M. T. Chakravarty and Gabriele Keller.

You can also look at the sources of GHC.PArr module, which contains a lot of comments.

The special syntax for parallel arrays is enabled by "ghc -fparr" or "ghci -fparr", which is not documented in the GHC 6.4.1 user manual.


Welcome to the machine: Array#, MutableArray#, ByteArray#, MutableByteArray#, pinned and moveable byte arrays

The GHC heap contains two kinds of objects. Some are just byte sequences, while the others are pointers to other objects (so-called "boxes"). This segregation allows the system to find chains of references when performing garbage collection and to update these pointers when memory used by the heap is compacted and objects are moved to new places. The internal (raw) GHC type Array# represents a sequence of object pointers (boxes). There is a low-level operation in the ST monad which allocates an array of specified size in the heap. Its type is something like (Int -> ST s Array#). The Array# type is used inside the Array type which represents boxed immutable arrays.

There is a different type for mutable boxed arrays (IOArray/STArray), namely MutableArray#. A separate type for mutable arrays is required because of the 2-stage garbage collection mechanism. The internal representations of Array# and MutableArray# are the same apart from some flags in the header, and this makes it possible to perform an in-place conversion between MutableArray# and Array# (this is what the unsafeFreeze and unsafeThaw operations do).

Unboxed arrays are represented by the ByteArray# type. This is just a plain memory area in the Haskell heap, like a C array. There are two primitive operations that create a ByteArray# of specified size. One allocates memory in the normal heap, and so this byte array can be moved when garbage collection occurs. This prevents the conversion of a ByteArray# to a plain memory pointer that can be used in C procedures (although it's still possible to pass a current ByteArray# pointer to an "unsafe foreign" procedure if the latter doesn't try to store this pointer somewhere). The second primitive allocates a ByteArray# of a specified size in the "pinned" heap area, which contains objects with a fixed location. Such a byte array will never be moved by garbage collection, so its address can be used as a plain Ptr and shared with the C world. The first way to create a ByteArray# is used inside the implementation of all UArray types, while the second way is used in StorableArray (although StorableArray can also point to data allocated by C malloc). A pinned ByteArray# is also used in ByteString.

There is also a MutableByteArray# type which is very similar to ByteArray#, but GHC's primitives support only monadic read/write operations for MutableByteArray#, and only pure reads for ByteArray#, along with the unsafeFreeze/unsafeThaw operations which change the appropriate fields in the headers of these arrays. This differentiation doesn't make much sense except for additional safety checks.

So, pinned MutableByteArray# or C malloced memory is used inside StorableArray, pinned ByteArray# or C malloced memory - inside ByteString, unpinned MutableByteArray# - inside IOUArray and STUArray, and unpinned ByteArray# is used inside UArray.

The APIs for boxed and unboxed arrays are almost identical:

marr <- alloc n            - allocates a mutable array of the given size
arr  <- unsafeFreeze marr  - converts a mutable array to an immutable one
marr <- unsafeThaw arr     - converts an immutable array to a mutable one
x    <- unsafeRead marr i  - monadic reading of the value with the given index from a mutable array
unsafeWrite marr i x       - monadic writing of the value at the given index in a mutable array
let x = unsafeAt arr i     - pure reading of the value with the given index from an immutable array
(all indices are counted from 0)

Based on these primitive operations, the array library implements indexing with any type and with any lower bound, bounds checking and all other high-level operations. Operations that create immutable arrays just create them as mutable arrays in the ST monad, make all required updates on this array, and then use unsafeFreeze before returning the array from runST. Operations on IO arrays are implemented via operations on ST arrays using the stToIO operation.

Mutable arrays and GC

GHC implements 2-stage GC which is very fast. Minor GC occurs after each 256 kb allocated and scans only this area (plus recent stack frames) when searching for "live" data. This solution uses the fact that normal Haskell data are immutable and therefore any data structures created before the previous minor GC can't point to data structures created after it, since due to immutability, data can contain only "backward" references.

But this simplicity breaks down when we add mutable boxed references (IORef/STRef) and arrays (IOArray/STArray) to the language. On each GC, including minor ones, each element in a mutable data structure has to be scanned, because it may have been updated since the last GC and may now point to data allocated since then.

For programs that contain a lot of data in mutable boxed arrays/references, GC times may easily outweigh the useful computation time. Ironically, one such program is GHC itself. The solution for such programs is to add a command line option like "+RTS -A10m", which increases the size of minor GC chunks from 256 kb to 10 mb, making minor GC 40 times less frequent. You can see the effect of this change by using the "+RTS -sstderr" option: "%GC time" should decrease significantly.

There is a way to compile this option into your executable so that it is used automatically on each execution - just add the following line to a C source file in your project:

char *ghc_rts_opts = "-A10m";

Of course, you can increase or decrease this value according to your needs.

Increasing "-A" value doesn't comes for free. Aside from the obvious increase in memory usage, execution times (of useful code) will also grow. The default "-A" value is tuned to be close to modern CPU cache sizes, so that most memory references fall inside the cache. When 10 mb of memory are allocated before doing GC, this data locality no longer holds. So increasing "-A" can either increase or decrease program speed. You should try various settings between 64 kb and 16 mb while the running program with "typical" parameters and try to select the best setting for your specific program and CPU combination.

There is also another way to avoid increasing GC times: use either unboxed or immutable arrays. Also note that immutable arrays are built as mutable ones and then "frozen", so during the construction time GC will also scan their contents.

Fortunately, GHC 6.6 fixed this problem - it remembers which references/arrays were updated since the last GC and scans only those. You can suffer from the old problem only if you use very large arrays.



Notes for contributors to this page

If you have any questions, please ask on the IRC channel or mailing list. If you have any answers, please submit them directly to this page. Please don't sign your contributions, so that anyone will feel free to further improve this page. But if you are a compiler or array library author, please sign your text to let us know that it is the Last Word of Truth :-)