Control.Distributed.MPI — MPI bindings for Haskell
(C) 2018 Erik Schnetter, Apache-2.0
Maintainer: Erik Schnetter <schnetter@gmail.com>
Stability: experimental
Portability: requires an externally installed MPI library

HasDatatype
  A type class mapping Haskell types to MPI datatypes. This is used to
  automatically determine the MPI datatype for communication buffers.

ThreadSupport
  Thread support levels for MPI (see initThread):
  - ThreadSingle (MPI_THREAD_SINGLE): The application must be single-threaded.
  - ThreadFunneled (MPI_THREAD_FUNNELED): The application might be
    multi-threaded, but only a single thread will call MPI.
  - ThreadSerialized (MPI_THREAD_SERIALIZED): The application might be
    multi-threaded, but the application guarantees that only one thread at a
    time will call MPI.
  - ThreadMultiple (MPI_THREAD_MULTIPLE): The application is multi-threaded,
    and different threads might call MPI at the same time.

Tag
  A newtype wrapper describing a message tag. A tag defines a sub-channel
  within a communicator. While communicators are heavy-weight objects that
  are expensive to set up and tear down, a tag is a lightweight mechanism
  using an integer. Use toTag and fromTag to convert between Tag and other
  enum types. unitTag defines a standard tag that can be used as a default.

Status
  An MPI status, wrapping MPI_Status. The status describes certain
  properties of a message. It contains information such as the source of a
  communication (getSource), the message tag (getTag), or the size of the
  message (getCount, getElements).
  In many cases the status is not interesting. In this case, you can use
  alternative functions ending with an underscore (e.g. recv_) that do not
  calculate a status.
  The status is particularly interesting when using probe or iprobe, as it
  describes a message that is ready to be received, but which has not been
  received yet.

Request
  An MPI request, wrapping MPI_Request. A request describes a communication
  that is currently in progress. Each request must be explicitly freed via
  cancel, test, or wait.
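As a sketch of how the thread-support levels are used in practice (a minimal example, assuming the mpi-hs API as documented in this module; the provided level may be lower than the requested one):

```haskell
-- Sketch: request full thread support, then check what the library granted.
import Control.Monad (when)
import qualified Control.Distributed.MPI as MPI

main :: IO ()
main = do
  provided <- MPI.initThread MPI.ThreadMultiple
  -- initThread returns the *provided* level, which might be less than
  -- the required level.
  when (provided < MPI.ThreadMultiple) $
    putStrLn ("MPI granted only " ++ show provided)
  MPI.finalize
```

Run under an MPI launcher, e.g. `mpirun -np 2 ./a.out`.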
Rank
  A newtype wrapper describing the source or destination of a message,
  i.e. a process. Each communicator numbers its processes sequentially,
  starting from zero. Use toRank and fromRank to convert between Rank and
  other integral types. rootRank is the root (first) process of a
  communicator.
  The association between a rank and a communicator is not explicitly
  tracked. From MPI's point of view, ranks are simply integers. The same
  rank might correspond to different processes in different communicators.

Op
  An MPI reduction operation, wrapping MPI_Op. Reduction operations need to
  be explicitly created and freed by the MPI library. Predefined operations
  exist for simple semigroups such as sum, maximum, or minimum.
  An MPI reduction operation corresponds to a Semigroup, not a Monoid,
  i.e. MPI has no notion of a respective neutral element.

Datatype
  An MPI datatype, wrapping MPI_Datatype. Datatypes need to be explicitly
  created and freed by the MPI library. Predefined datatypes exist for most
  simple C types such as CInt or CDouble.

Count
  A newtype wrapper describing the size of a message. Use toCount and
  fromCount to convert between Count and other integral types.

ComparisonResult
  The result of comparing two MPI communicators (see commCompare).

Comm
  An MPI communicator, wrapping MPI_Comm. A communicator defines an
  independent communication channel between a group of processes.
  Communicators need to be explicitly created and freed by the MPI library.
  commWorld is a communicator that is always available, and which includes
  all processes.

Buffer
  A generic pointer-like type that supports converting to a Ptr, and which
  knows the type and number of its elements. This class describes the MPI
  buffers used to send and receive messages.

toCount: Convert an integer to a count.
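Ranks, communicators, and the initialization functions come together in the usual MPI "hello world". A minimal sketch with the low-level bindings (assuming the API as documented here):

```haskell
-- Sketch: every process reports its rank within the world communicator.
import qualified Control.Distributed.MPI as MPI

main :: IO ()
main = do
  MPI.init
  rank <- MPI.commRank MPI.commWorld   -- this process's Rank
  size <- MPI.commSize MPI.commWorld   -- number of processes
  putStrLn ("process " ++ show rank ++ " of " ++ show size)
  MPI.finalize
```

Launched as `mpirun -np 4 ./a.out`, each of the four processes prints its own rank (0 through 3), in no particular order.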
fromCount: Convert a count to an integer.
toRank: Convert an enum to a rank.
fromRank: Convert a rank to an enum.
rootRank: The root (first) rank of a communicator.
getSource: Get the source rank of a message (MPI_SOURCE).
getTag: Get the message tag (MPI_TAG).
toTag: Convert an enum to a tag.
fromTag: Convert a tag to an enum.
unitTag: Useful default tag.
commNull: A null (invalid) communicator (MPI_COMM_NULL).
commSelf: The self communicator (MPI_COMM_SELF). Each process has its own self communicator that includes only this process.
commWorld: The world communicator, which includes all processes (MPI_COMM_WORLD).
countUndefined: Error value returned by getCount if the message is too large, or if the message size is not an integer multiple of the provided datatype (MPI_UNDEFINED).
datatypeNull: A null (invalid) datatype.
datatypeByte: MPI datatype for a byte (essentially CUChar) (MPI_BYTE).
datatypeChar: MPI datatype for CChar (MPI_CHAR).
datatypeDouble: MPI datatype for CDouble (MPI_DOUBLE).
datatypeFloat: MPI datatype for CFloat (MPI_FLOAT).
datatypeInt: MPI datatype for CInt (MPI_INT).
datatypeLong: MPI datatype for CLong (MPI_LONG).
datatypeLongDouble: MPI datatype for the C type 'long double' (MPI_LONG_DOUBLE).
datatypeLongLongInt: MPI datatype for CLLong (MPI_LONG_LONG_INT). (There is no MPI datatype for CULLong.)
datatypeShort: MPI datatype for CShort (MPI_SHORT).
datatypeUnsigned: MPI datatype for CUInt (MPI_UNSIGNED).
datatypeUnsignedChar: MPI datatype for CUChar (MPI_UNSIGNED_CHAR).
datatypeUnsignedLong: MPI datatype for CULong (MPI_UNSIGNED_LONG).
datatypeUnsignedShort: MPI datatype for CUShort (MPI_UNSIGNED_SHORT).
opNull: A null (invalid) reduction operation (MPI_OP_NULL).
opBand: The bitwise and (.&.) reduction operation (MPI_BAND).
opBor: The bitwise or (.|.) reduction operation (MPI_BOR).
opBxor: The bitwise xor reduction operation (MPI_BXOR).
opLand: The logical and (&&) reduction operation (MPI_LAND).
opLor: The logical or (||) reduction operation (MPI_LOR).
opLxor: The logical xor reduction operation (MPI_LXOR).
opMax: The maximum reduction operation (MPI_MAX).
opMaxloc: The argmax reduction operation to find the maximum and its rank (MPI_MAXLOC).
opMin: The minimum reduction operation (MPI_MIN).
opMinloc: The argmin reduction operation to find the minimum and its rank (MPI_MINLOC).
opProd: The product reduction operation (MPI_PROD).
opSum: The sum reduction operation (MPI_SUM).
anySource: Rank placeholder to specify that a message can be received from any source (MPI_ANY_SOURCE). When calling probe or recv (or iprobe or irecv) with anySource as source, the actual source can be determined from the returned message status via getSource.
requestNull: A null (invalid) request (MPI_REQUEST_NULL).
anyTag: Tag placeholder to specify that a message can have any tag (MPI_ANY_TAG). When calling probe or recv (or iprobe or irecv) with anyTag as tag, the actual tag can be determined from the returned message status via getTag.
abort: Terminate the MPI execution environment (MPI_Abort; https://www.open-mpi.org/doc/current/man3/MPI_Abort.3.php).
allgather: Gather data from all processes and broadcast the result (collective; MPI_Allgather; https://www.open-mpi.org/doc/current/man3/MPI_Allgather.3.php).
allreduce: Reduce data from all processes and broadcast the result (collective; MPI_Allreduce; https://www.open-mpi.org/doc/current/man3/MPI_Allreduce.3.php). The MPI datatype is determined automatically from the buffer pointer types.
alltoall: Send data from all processes to all processes (collective; MPI_Alltoall; https://www.open-mpi.org/doc/current/man3/MPI_Alltoall.php). The MPI datatypes are determined automatically from the buffer pointer types.
barrier: Barrier (collective; MPI_Barrier; https://www.open-mpi.org/doc/current/man3/MPI_Barrier.3.php).
bcast: Broadcast data from one process to all processes (collective; MPI_Bcast; https://www.open-mpi.org/doc/current/man3/MPI_Bcast.3.php). The MPI datatype is determined automatically from the buffer pointer type.
commCompare: Compare two communicators (MPI_Comm_compare; https://www.open-mpi.org/doc/current/man3/MPI_Comm_compare.3.php).
commRank: Return this process's rank in a communicator (MPI_Comm_rank; https://www.open-mpi.org/doc/current/man3/MPI_Comm_rank.3.php).
commSize: Return the number of processes in a communicator (MPI_Comm_size; https://www.open-mpi.org/doc/current/man3/MPI_Comm_size.3.php).
exscan: Reduce data from all processes via an exclusive (prefix) scan (collective; MPI_Exscan; https://www.open-mpi.org/doc/current/man3/MPI_Exscan.3.php). Each process with rank r receives the result of reducing data from rank 0 to rank r-1 (inclusive). Rank 0 should logically receive a neutral element of the reduction operation, but instead receives an undefined value since MPI is not aware of neutral values for reductions. The MPI datatype is determined automatically from the buffer pointer type.
finalize: Finalize (shut down) the MPI library (collective; MPI_Finalize; https://www.open-mpi.org/doc/current/man3/MPI_Finalize.3.php).
finalized: Return whether the MPI library has been finalized (MPI_Finalized; https://www.open-mpi.org/doc/current/man3/MPI_Finalized.3.php).
gather: Gather data from all processes to the root process (collective; MPI_Gather; https://www.open-mpi.org/doc/current/man3/MPI_Gather.3.php). The MPI datatypes are determined automatically from the buffer pointer types.
getCount: Get the size of a message, in terms of objects of the given type (MPI_Get_count; https://www.open-mpi.org/doc/current/man3/MPI_Get_count.3.php). To determine the MPI datatype for a given Haskell type, use datatype (call e.g. as 'datatype @CInt').
getElements: Get the number of elements in a message, in terms of sub-objects of the type datatype (MPI_Get_elements; https://www.open-mpi.org/doc/current/man3/MPI_Get_elements.3.php). This is useful when a message contains partial objects of type datatype. To determine the MPI datatype for a given Haskell type, use datatype (call e.g. as 'datatype @CInt').
getLibraryVersion: Return the version of the MPI library (MPI_Get_library_version; https://www.open-mpi.org/doc/current/man3/MPI_Get_library_version.3.php). Note that the version of the MPI standard that this library implements is returned by getVersion.
getProcessorName: Return the name of the current process (MPI_Get_processor_name; https://www.open-mpi.org/doc/current/man3/MPI_Get_processor_name.3.php). This should uniquely identify the hardware on which this process is running.
getVersion: Return the version of the MPI standard implemented by this library (MPI_Get_version; https://www.open-mpi.org/doc/current/man3/MPI_Get_version.3.php). Note that the version of the MPI library itself is returned by getLibraryVersion.
iallgather: Begin to gather data from all processes and broadcast the result, and return a handle to the communication request (collective, non-blocking; MPI_Iallgather; https://www.open-mpi.org/doc/current/man3/MPI_Iallgather.3.php). The request must be freed by calling test, wait, or similar. The MPI datatypes are determined automatically from the buffer pointer types.
iallreduce: Begin to reduce data from all processes and broadcast the result, and return a handle to the communication request (collective, non-blocking; MPI_Iallreduce; https://www.open-mpi.org/doc/current/man3/MPI_Iallreduce.3.php). The request must be freed by calling test, wait, or similar. The MPI datatype is determined automatically from the buffer pointer types.
ialltoall: Begin to send data from all processes to all processes, and return a handle to the communication request (collective, non-blocking; MPI_Ialltoall; https://www.open-mpi.org/doc/current/man3/MPI_Ialltoall.php). The request must be freed by calling test, wait, or similar. The MPI datatypes are determined automatically from the buffer pointer types.
ibarrier: Start a barrier, and return a handle to the communication request (collective, non-blocking; MPI_Ibarrier; https://www.open-mpi.org/doc/current/man3/MPI_Ibarrier.3.php). The request must be freed by calling test, wait, or similar.
ibcast: Begin to broadcast data from one process to all processes, and return a handle to the communication request (collective, non-blocking; MPI_Ibcast; https://www.open-mpi.org/doc/current/man3/MPI_Ibcast.3.php). The request must be freed by calling test, wait, or similar. The MPI datatype is determined automatically from the buffer pointer type.
iexscan: Begin to reduce data from all processes via an exclusive (prefix) scan, and return a handle to the communication request (collective, non-blocking; MPI_Iexscan; https://www.open-mpi.org/doc/current/man3/MPI_Iexscan.3.php). Each process with rank r receives the result of reducing data from rank 0 to rank r-1 (inclusive). Rank 0 should logically receive a neutral element of the reduction operation, but instead receives an undefined value since MPI is not aware of neutral values for reductions. The request must be freed by calling test, wait, or similar. The MPI datatype is determined automatically from the buffer pointer type.
igather: Begin to gather data from all processes to the root process, and return a handle to the communication request (collective, non-blocking; MPI_Igather; https://www.open-mpi.org/doc/current/man3/MPI_Igather.3.php). The request must be freed by calling test, wait, or similar. The MPI datatypes are determined automatically from the buffer pointer types.
initialized: Return whether the MPI library has been initialized (MPI_Initialized; https://www.open-mpi.org/doc/current/man3/MPI_Initialized.3.php).
init: Initialize the MPI library (collective; MPI_Init; https://www.open-mpi.org/doc/current/man3/MPI_Init.3.php). This corresponds to calling initThread.
initThread: Initialize the MPI library (collective; MPI_Init_thread; https://www.open-mpi.org/doc/current/man3/MPI_Init_thread.3.php). Note that the provided level of thread support might be less than (!) the required level.
iprobe: Probe (check) for incoming messages without waiting (non-blocking; MPI_Iprobe; https://www.open-mpi.org/doc/current/man3/MPI_Iprobe.3.php).
iprobe_: Probe (check) for an incoming message without waiting (MPI_Iprobe; https://www.open-mpi.org/doc/current/man3/MPI_Iprobe.3.php). This function does not return a status, which might be more efficient if the status is not needed.
irecv: Begin to receive a message, and return a handle to the communication request (non-blocking; MPI_Irecv; https://www.open-mpi.org/doc/current/man3/MPI_Irecv.3.php). The request must be freed by calling test, wait, or similar. The MPI datatype is determined automatically from the buffer pointer type.
ireduce: Begin to reduce data from all processes, and return a handle to the communication request (collective, non-blocking; MPI_Ireduce; https://www.open-mpi.org/doc/current/man3/MPI_Ireduce.3.php). The result is only available on the root process. The request must be freed by calling test, wait, or similar. The MPI datatypes are determined automatically from the buffer pointer types.
iscan: Begin to reduce data from all processes via an (inclusive) scan, and return a handle to the communication request (collective, non-blocking; MPI_Iscan; https://www.open-mpi.org/doc/current/man3/MPI_Iscan.3.php). Each process with rank r receives the result of reducing data from rank 0 to rank r (inclusive). The request must be freed by calling test, wait, or similar. The MPI datatype is determined automatically from the buffer pointer type.
iscatter: Begin to scatter data from the root process to all processes, and return a handle to the communication request (collective, non-blocking; MPI_Iscatter; https://www.open-mpi.org/doc/current/man3/MPI_Iscatter.3.php). The request must be freed by calling test, wait, or similar. The MPI datatypes are determined automatically from the buffer pointer types.
isend: Begin to send a message, and return a handle to the communication request (non-blocking; MPI_Isend; https://www.open-mpi.org/doc/current/man3/MPI_Isend.3.php). The request must be freed by calling test, wait, or similar. The MPI datatype is determined automatically from the buffer pointer type.
probe: Probe (wait) for an incoming message (MPI_Probe; https://www.open-mpi.org/doc/current/man3/MPI_Probe.3.php).
probe_: Probe (wait) for an incoming message (MPI_Probe; https://www.open-mpi.org/doc/current/man3/MPI_Probe.3.php). This function does not return a status, which might be more efficient if the status is not needed.
recv: Receive a message (MPI_Recv; https://www.open-mpi.org/doc/current/man3/MPI_Recv.3.php). The MPI datatype is determined automatically from the buffer pointer type.
recv_: Receive a message (MPI_Recv; https://www.open-mpi.org/doc/current/man3/MPI_Recv.3.php). The MPI datatype is determined automatically from the buffer pointer type. This function does not return a status, which might be more efficient if the status is not needed.
reduce: Reduce data from all processes (collective; MPI_Reduce; https://www.open-mpi.org/doc/current/man3/MPI_Reduce.3.php). The result is only available on the root process. The MPI datatypes are determined automatically from the buffer pointer types.
requestGetStatus: Check whether a communication has completed without freeing the communication request (MPI_Request_get_status; https://www.open-mpi.org/doc/current/man3/MPI_Request_get_status.3.php).
requestGetStatus_: Check whether a communication has completed without freeing the communication request (MPI_Request_get_status; https://www.open-mpi.org/doc/current/man3/MPI_Request_get_status.3.php). This function does not return a status, which might be more efficient if the status is not needed.
scan: Reduce data from all processes via an (inclusive) scan (collective; MPI_Scan; https://www.open-mpi.org/doc/current/man3/MPI_Scan.3.php). Each process with rank r receives the result of reducing data from rank 0 to rank r (inclusive). The MPI datatype is determined automatically from the buffer pointer type.
scatter: Scatter data from the root process to all processes (collective; MPI_Scatter; https://www.open-mpi.org/doc/current/man3/MPI_Scatter.3.php). The MPI datatypes are determined automatically from the buffer pointer types.
send: Send a message (MPI_Send; https://www.open-mpi.org/doc/current/man3/MPI_Send.3.php). The MPI datatype is determined automatically from the buffer pointer type.
sendrecv: Send and receive a message with a single call (MPI_Sendrecv; https://www.open-mpi.org/doc/current/man3/MPI_Sendrecv.3.php). The MPI datatypes are determined automatically from the buffer pointer types.
sendrecv_: Send and receive a message with a single call (MPI_Sendrecv; https://www.open-mpi.org/doc/current/man3/MPI_Sendrecv.3.php). The MPI datatypes are determined automatically from the buffer pointer types. This function does not return a status, which might be more efficient if the status is not needed.
test: Check whether a communication has completed, and free the communication request if so (MPI_Test; https://www.open-mpi.org/doc/current/man3/MPI_Test.3.php).
test_: Check whether a communication has completed, and free the communication request if so (MPI_Test; https://www.open-mpi.org/doc/current/man3/MPI_Test.3.php). This function does not return a status, which might be more efficient if the status is not needed.
wait: Wait for a communication request to complete, then free the request (MPI_Wait; https://www.open-mpi.org/doc/current/man3/MPI_Wait.3.php).
wait_: Wait for a communication request to complete, then free the request (MPI_Wait; https://www.open-mpi.org/doc/current/man3/MPI_Wait.3.php). This function does not return a status, which might be more efficient if the status is not needed.
wtick: Wall time tick (accuracy of wtime) (in seconds).
wtime: Current wall time (in seconds).
threadSupport: When MPI is initialized with this library, then it will remember the provided level of thread support. (This might be less than the requested level.)

Function arguments:
  abort: communicator describing which processes to terminate; error code
  allgather: source buffer; destination buffer; communicator
  allreduce: source buffer; destination buffer; reduction operation; communicator
  alltoall: source buffer; destination buffer; communicator
  barrier: communicator
  bcast: buffer (read on the root process, written on all other processes); root rank (sending process); communicator
  commCompare: communicator; other communicator
  commRank: communicator
  commSize: communicator
  exscan: source buffer; destination buffer; reduction operation; communicator
  gather: source buffer; destination buffer (only used on the root process); root rank; communicator
  getCount: message status; MPI datatype
  getElements: message status; MPI datatype
  iallgather: source buffer; destination buffer; communicator → communication request
  iallreduce: source buffer; destination buffer; reduction operation; communicator → communication request
  ialltoall: source buffer; destination buffer; communicator → communication request
  ibarrier: communicator → communication request
  ibcast: buffer (read on the root process, written on all other processes); root rank (sending process); communicator → communication request
  iexscan: source buffer; destination buffer; reduction operation; communicator → communication request
  igather: source buffer; destination buffer (relevant only on the root process); root rank; communicator → communication request
  initThread: required level of thread support → provided level of thread support
  iprobe: source rank (may be anySource); message tag (may be anyTag); communicator → status of the message if a message is available, else Nothing
  iprobe_: source rank (may be anySource); message tag (may be anyTag); communicator → whether a message is available
  irecv: receive buffer; source rank (may be anySource); message tag (may be anyTag); communicator → communication request
  ireduce: source buffer; destination buffer; reduction operation; root rank; communicator → communication request
  iscan: source buffer; destination buffer; reduction operation; communicator → communication request
  iscatter: source buffer (only used on the root process); destination buffer; root rank; communicator → communication request
  isend: send buffer; destination rank; message tag; communicator → communication request
  probe: source rank (may be anySource); message tag (may be anyTag); communicator → message status
  probe_: source rank (may be anySource); message tag (may be anyTag); communicator
  recv: receive buffer; source rank (may be anySource); message tag (may be anyTag); communicator → message status
  recv_: receive buffer; source rank (may be anySource); message tag (may be anyTag); communicator
  reduce: source buffer; destination buffer; reduction operation; root rank; communicator
  requestGetStatus: communication request → Just the status if the request has completed, else Nothing
  requestGetStatus_: communication request → whether the request has completed
  scan: source buffer; destination buffer; reduction operation; communicator
  scatter: source buffer (only used on the root process); destination buffer; root rank; communicator
  send: send buffer; destination rank; message tag; communicator
  sendrecv: send buffer; destination rank; sent message tag; receive buffer; source rank (may be anySource); received message tag (may be anyTag); communicator → status for the received message
  sendrecv_: send buffer; destination rank; sent message tag; receive buffer; source rank (may be anySource); received message tag (may be anyTag); communicator
  test: communication request → Just the status if the request has completed, else Nothing
  test_: communication request → whether the request has completed
  wait: communication request → message status
  wait_: communication request

Control.Distributed.MPI.Packing — Simplified MPI bindings with automatic serialization based on GHC.Packing
(C) 2018 Erik Schnetter, Apache-2.0
Maintainer: Erik Schnetter <schnetter@gmail.com>
Stability: experimental
Portability: requires an externally installed MPI library

Status: The status of a finished communication, indicating rank (msgRank) and tag (msgTag) of the other communication end point.
Request: A communication request, usually created by a non-blocking communication function.
MPIException: Exception type indicating an error in a call to MPI.
whileNothing: Run the supplied Maybe computation repeatedly while it returns Nothing. If it returns a value, then return that value.
mainMPI: Convenience function to initialize and finalize MPI. This initializes MPI with ThreadMultiple thread support.
recv: Receive an object.
recv_: Receive an object without returning a status.
send: Send an object.
sendrecv: Send and receive objects simultaneously.
sendrecv_: Send and receive objects simultaneously, without returning a status for the received message.
irecv: Begin to receive an object. Call test or wait to finish the communication, and to obtain the received object.
isend: Begin to send an object. Call test or wait to finish the communication.
test: Check whether a communication has finished, and return the communication result if so.
test_: Check whether a communication has finished, and return the communication result if so, without returning a message status.
wait: Wait for a communication to finish and return the communication result.
wait_: Wait for a communication to finish and return the communication result, without returning a message status.
bcastRecv: Broadcast a message from one process (the "root") to all other processes in the communicator. Call this function on all non-root processes; call bcastSend instead on the root process.
bcastSend: Broadcast a message from one process (the "root") to all other processes in the communicator. Call this function on the root process; call bcastRecv instead on all non-root processes.
ibarrier: Begin a barrier. Call test or wait to finish the communication.

Function arguments:
  mainMPI: action to run with MPI, typically the whole program
  recv: source rank; source tag; communicator → message status and received object
  recv_: source rank; source tag; communicator → received object
  send: object to send; destination rank; message tag; communicator
  sendrecv: object to send; destination rank; send message tag; source rank; receive message tag; communicator → message status and received object
  sendrecv_: object to send; destination rank; send message tag; source rank; receive message tag; communicator → received object
  irecv: source rank; source tag; communicator → communication request
  isend: object to send; destination rank; message tag; communicator → communication request
  test: communication request → communication result, if the communication has finished, else Nothing
  test_: communication request → communication result, if the communication has finished, else Nothing
  wait: communication request → message status and communication result
  wait_: communication request → communication result

Control.Distributed.MPI.Store — Simplified MPI bindings with automatic serialization based on Data.Store
(C) 2018 Erik Schnetter, Apache-2.0
Maintainer: Erik Schnetter <schnetter@gmail.com>
Stability: experimental
Portability: requires an externally installed MPI library

This module provides the same interface as Control.Distributed.MPI.Packing (Status with msgRank and msgTag, Request, MPIException, whileNothing, mainMPI, send, recv, recv_, sendrecv, sendrecv_, isend, irecv, test, test_, wait, wait_, bcastSend, bcastRecv, and ibarrier), with serialization based on Data.Store instead of GHC.Packing. The function documentation above applies unchanged.
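The simplified interface needs no explicit buffers: any serializable Haskell value can be sent directly. A minimal sketch of point-to-point communication with the Data.Store-based module (assuming the API as documented above, and at least two processes):

```haskell
-- Sketch: rank 0 sends a Haskell value to rank 1, which receives and
-- prints it. mainMPI handles initialization and finalization.
import Control.Monad (when)
import qualified Control.Distributed.MPI.Store as MPI

main :: IO ()
main = MPI.mainMPI $ do
  rank <- MPI.commRank MPI.commWorld
  when (rank == MPI.toRank 0) $
    -- object to send; destination rank; message tag; communicator
    MPI.send ("hello", 42 :: Int) (MPI.toRank 1) MPI.unitTag MPI.commWorld
  when (rank == MPI.toRank 1) $ do
    -- recv_ returns only the received object, no status
    msg <- MPI.recv_ (MPI.toRank 0) MPI.unitTag MPI.commWorld
    print (msg :: (String, Int))
```

The same code works with Control.Distributed.MPI.Packing by changing only the import, since the two modules expose the same interface.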
(The argument descriptions for the Control.Distributed.MPI.Store functions are identical to those listed above for Control.Distributed.MPI.Packing.)
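The non-blocking variants follow the request/wait pattern described above. A sketch (assuming the documented API; the sender side uses a blocking send for simplicity, and at least two processes are assumed):

```haskell
-- Sketch: rank 1 starts a non-blocking receive, could do other work,
-- and then waits for the object to arrive.
import Control.Monad (when)
import qualified Control.Distributed.MPI.Store as MPI

main :: IO ()
main = MPI.mainMPI $ do
  rank <- MPI.commRank MPI.commWorld
  when (rank == MPI.toRank 0) $
    MPI.send (3.14 :: Double) (MPI.toRank 1) MPI.unitTag MPI.commWorld
  when (rank == MPI.toRank 1) $ do
    req <- MPI.irecv (MPI.toRank 0) MPI.unitTag MPI.commWorld
    -- ... other work could happen here ...
    x <- MPI.wait_ req    -- block until finished; returns the object only
    print (x :: Double)
```

Using test (or test_) instead of wait_ would poll for completion without blocking, returning Nothing while the communication is still in progress.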
Modules: Control.Distributed.MPI, Control.Distributed.MPI.Packing, Control.Distributed.MPI.Store (package mpi-hs-0.4.0.0).