Control.Parallel.HdpH.Conf

NodeId
  A NodeId identifies a node (that is, an OS process running HdpH). A
  NodeId should be thought of as an abstract identifier (though it is
  not currently abstract) which instantiates the classes Eq, Ord,
  Show, NFData and Serialize.

RTSConf
  RTSConf is a record data type collecting a number of parameters
  governing the behaviour of the HdpH runtime system.

debugLvl
  Debug level, a number defined in module
  Control.Parallel.HdpH.Internal.Location.
  Default is 0 (corresponding to no debug output).

scheds
  Number of concurrent schedulers per node. Must be positive and
  should not exceed the number of HECs (as set by GHC RTS option -N).
  Default is 1.

wakeupDly
  Interval in microseconds to wake up sleeping schedulers (which is
  necessary to recover from a race condition between concurrent
  schedulers). Must be positive.
  Default is 1000 (corresponding to 1 millisecond).

maxHops
  Number of hops a FISH message may travel before being considered
  failed. Must be non-negative. Default is 7.

maxFish
  Low sparkpool watermark for fishing. The RTS will send a FISH
  message unless the size of the spark pool is greater than maxFish
  (or unless a FISH is outstanding). Must be non-negative; should be
  < minSched. Default is 1.

minSched
  Low sparkpool watermark for scheduling. The RTS will respond to
  FISH messages by SCHEDULEing sparks unless the size of the spark
  pool is less than minSched. Must be non-negative; should be
  > maxFish. Default is 2.

minFishDly
  After a failed FISH, minimal delay in microseconds before sending
  another FISH message; the actual delay is chosen randomly between
  minFishDly and maxFishDly. Must be non-negative; should be
  <= maxFishDly.
  Default is 10000 (corresponding to 10 milliseconds).

maxFishDly
  After a failed FISH, maximal delay in microseconds before sending
  another FISH message; the actual delay is chosen randomly between
  minFishDly and maxFishDly. Must be non-negative; should be
  >= minFishDly.
  Default is 1000000 (corresponding to 1 second).

numProcs
  Number of nodes constituting the distributed runtime system.
  Must be positive. Default is 1.

networkInterface
  Network interface, required to autodetect a node's IP address. The
  string must be one of the interface names returned by the POSIX
  command ifconfig.
  Default is eth0 (corresponding to the first Ethernet interface).

defaultRTSConf
  Default runtime system configuration parameters.

Control.Parallel.HdpH.Internal.Comm (internal use only)

debugRef
  Internal use only: debug level.

waitShutdown
  Block until receiving a shutdown signal; this closes all
  connections, the local endpoint, and then the transport layer, in
  that order.

uncleanShutdown
  Used in an unclean shutdown when the connection with the master
  node has been unexpectedly closed.

killConnections
  Used during shutdown, to close connections with all processes
  safely.

shutdown
  Initiate a shutdown from the master process.

shutdownTransport
  Connection with the master process has been closed unexpectedly;
  close the transport layer.

connectToAllNodes
  Used to set up and store a Map of NodeId -> NT.Connection. Also
  creates a list of [NodeId] that is written to the allNodesRef
  IORef. Before the receiveServer function is forked, each node must
  receive a Booted payload from the master process; this indicates
  that all nodes have sent an ALIVE payload to the master process.

  Additionally, the connection made with a remote node is written
  into an MVar, to be stored in the connectionLookupRef IORef; this
  action is only executed once the master node has received
  NT.ConnectionOpened from all other nodes.

Control.Parallel.HdpH

Par
  Par is a type constructor of kind *->* and an instance of classes
  Functor and Monad. Par is defined in terms of a parametric
  continuation monad by plugging in RTS, the state monad of the
  runtime system. Since neither the continuation monad nor RTS are
  exported, Par can be considered abstract.
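For illustration, the runtime configuration described in this section can be mirrored by a plain Haskell record. This is a sketch only: the field names and default values come from the documentation above, while the concrete field types are assumptions (HdpH may use different numeric types). Customisation is ordinary record update on defaultRTSConf.

```haskell
-- Sketch of the RTSConf record as documented above; field types are assumed.
data RTSConf = RTSConf
  { debugLvl         :: Int     -- default 0 (no debug output)
  , scheds           :: Int     -- default 1 scheduler per node
  , wakeupDly        :: Int     -- default 1000 microseconds
  , maxHops          :: Int     -- default 7 hops per FISH message
  , maxFish          :: Int     -- default 1 (fishing watermark)
  , minSched         :: Int     -- default 2 (scheduling watermark)
  , minFishDly       :: Int     -- default 10000 microseconds
  , maxFishDly       :: Int     -- default 1000000 microseconds
  , numProcs         :: Int     -- default 1 node
  , networkInterface :: String  -- default "eth0"
  } deriving (Show)

-- Defaults exactly as documented above.
defaultRTSConf :: RTSConf
defaultRTSConf = RTSConf
  { debugLvl = 0, scheds = 1, wakeupDly = 1000, maxHops = 7
  , maxFish = 1, minSched = 2, minFishDly = 10000
  , maxFishDly = 1000000, numProcs = 1, networkInterface = "eth0" }

-- Customising via record update: 4 schedulers on each of 8 nodes.
myConf :: RTSConf
myConf = defaultRTSConf { scheds = 4, numProcs = 8 }
```

Record update keeps all other parameters at their documented defaults, which is the usual way to derive a configuration from defaultRTSConf.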
GIVar
  A GIVar (short for global IVar) is a globally unique handle
  referring to an IVar. Unlike IVars, GIVars can be compared and
  serialised. They can also be written to remotely by the operation
  rput.

IVar
  An IVar is a write-once one-place buffer. IVars are abstract; they
  can be accessed and manipulated only by the operations put, get,
  tryGet, probe and glob.

NodeId
  A NodeId identifies a node (that is, an OS process running HdpH).
  A NodeId should be thought of as an abstract identifier which
  instantiates the classes Eq, Ord, Show, NFData and Serialize.

declareStatic
  Static declaration of Static deserialisers used in explicit
  Closures created or imported by this module. This Static
  declaration must be imported by every main module using HdpH. The
  imported Static declaration must be combined with the main
  module's own Static declaration and registered; failure to do so
  may abort the program at runtime.

at
  Returns the node hosting the IVar referred to by the given GIVar.
  This function being pure implies that IVars cannot migrate between
  nodes.

runRTS_ (internal)
  Eliminate the RTS monad down to IO by running the given action;
  aspects of the RTS's behaviour are controlled by the respective
  parameters in the conf argument.

isMainRTS (internal)
  Return True iff this node is the root node.

shutdownRTS (internal)
  Initiate RTS shutdown.

(internal)
  Eliminate the Par monad by converting the given Par action p into
  an RTS action (to be executed on any one node of the distributed
  runtime system).

runParIO_
  Eliminates the Par monad by executing the given parallel
  computation p, including setting up and initialising a distributed
  runtime system according to the configuration parameter conf. This
  function lives in the IO monad because p may be impure; for
  instance, p may exhibit non-determinism.
  Caveat: Though the computation p will only be started on a single
  root node, runParIO_ must be executed on every node of the
  distributed runtime system due to the SPMD nature of HdpH. Note
  that the configuration parameter conf applies to all nodes
  uniformly; at present there is no support for heterogeneous
  configurations.
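The write-once IVar semantics described above can be illustrated with a single-node analogue built on MVar. This is purely an illustrative sketch in IO, not HdpH's implementation: HdpH's real IVars live in the Par monad and additionally support glob and rput; the polarity of probe (True iff full) is an assumption.

```haskell
import Control.Concurrent.MVar

-- A write-once one-place buffer, sketched in IO with an MVar.
newtype IVar a = IVar (MVar a)

-- Create a new empty IVar.
new :: IO (IVar a)
new = IVar <$> newEmptyMVar

-- Write without forcing the value; a second write is an error (write-once).
put :: IVar a -> a -> IO ()
put (IVar mv) x = do
  ok <- tryPutMVar mv x
  if ok then return () else error "IVar already full"

-- Blocks until the IVar is full; does not empty it.
get :: IVar a -> IO a
get (IVar mv) = readMVar mv

-- Non-blocking read: Nothing if the IVar is still empty.
tryGet :: IVar a -> IO (Maybe a)
tryGet (IVar mv) = tryReadMVar mv

-- Non-blocking test; True iff the IVar is full (assumed polarity).
probe :: IVar a -> IO Bool
probe (IVar mv) = not <$> isEmptyMVar mv
```

Note that get uses readMVar rather than takeMVar, so a full IVar can be read any number of times, matching the one-place write-once behaviour documented for HdpH.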
runParIO
  Convenience: variant of runParIO_ which does return a result.
  Caveat: The result is only returned on the root node; all other
  nodes return Nothing.

done
  Terminates the current thread.

myNode
  Returns the node this operation is currently executed on.

allNodes
  Returns a list of all nodes currently forming the distributed
  runtime system.

io
  Lifts an IO action into the Par monad.

eval
  Evaluates its argument to weak head normal form.

force
  Evaluates its argument to normal form (as defined by its NFData
  instance).

fork
  Creates a new thread, to be executed on the current node.

spark
  Creates a spark, to be available for work stealing. The spark may
  be converted into a thread and executed locally, or it may be
  stolen by another node and executed there.

pushTo
  Pushes a computation to the given node, where it is eagerly
  converted into a thread and executed.

new
  Creates a new empty IVar.

put
  Writes to given IVar (without forcing the value written).

get
  Reads from given IVar; blocks if the IVar is empty.

tryGet
  Reads from given IVar; does not block but returns Nothing if the
  IVar is empty.

probe
  Tests whether given IVar is empty or full; does not block.

glob
  Globalises given IVar, returning a globally unique handle; this
  operation is restricted to IVars of Closure type.

rput
  Writes to (possibly remote) IVar denoted by given global handle;
  this operation is restricted to write values of Closure type.

Control.Parallel.HdpH.Strategies

ProtoStrategy
  A ProtoStrategy is almost a Strategy. More precisely, a
  ProtoStrategy for type a is a delayed (semantic) identity function
  in the Par monad, i.e. it returns an IVar (rather than a term) of
  type a.

StaticForceCC
  Type synonym for declaring the Static deserialisers required by
  ForceCC instances; see the tutorial in module
  Control.Parallel.HdpH.Closure for a more thorough explanation.

ForceCC
  Indexing class, recording which types support forceCC; see the
  tutorial in module Control.Parallel.HdpH.Closure for a more
  thorough explanation.

locForceCC
  Only method of class ForceCC, recording the source location where
  an instance of ForceCC is declared.

Strategy
  A Strategy for type a is a (semantic) identity in the Par monad.
  For an elaboration of this concept (in the context of the Eval
  monad) see the paper: Marlow et al., "Seq no more: Better
  Strategies for parallel Haskell", Haskell 2010.

using
  Strategy application is actual application (in the Par monad).

r0
  Do-nothing strategy.

rseq
  Evaluate-head-strict strategy; probably not very useful in HdpH.

rdeepseq
  Evaluate-fully strategy.

forceC
  forceC is the fully forcing Closure strategy, i.e. it fully
  normalises the thunk inside an explicit Closure. Importantly,
  forceC alters the serialisable representation so that
  serialisation will not force the Closure again.

forceCC
  forceCC is a Closure wrapping the fully forcing Closure strategy
  forceC; see the tutorial in module Control.Parallel.HdpH.Closure
  for details on the implementation of forceCC.

staticForceCC
  Static deserialiser required by a ForceCC instance; see the
  tutorial in module Control.Parallel.HdpH.Closure for a more
  thorough explanation.

sparkClosure
  sparkClosure clo_strat is a ProtoStrategy that sparks a Closure;
  evaluation of the sparked Closure is governed by the strategy
  clo_strat.

pushClosure
  pushClosure clo_strat n is a ProtoStrategy that pushes a Closure
  to be executed in a new thread on node n; evaluation of the pushed
  Closure is governed by the strategy clo_strat.

evalList
  Evaluate each element of a list according to the given strategy.

evalClosureListClosure
  Specialisation of evalList to a list of Closures (wrapped in a
  Closure). Useful for building clustering strategies.

parClosureList
  Evaluate each element of a list of Closures in parallel according
  to the given strategy (wrapped in a Closure). Work is distributed
  by lazy work stealing.

pushClosureList
  Evaluate each element of a list of Closures in parallel according
  to the given strategy (wrapped in a Closure). Work is pushed
  round-robin to the given list of nodes.

pushRandClosureList
  Evaluate each element of a list of Closures in parallel according
  to the given strategy (wrapped in a Closure). Work is pushed
  randomly to the given list of nodes.

parClosureListClusterBy
  parClosureListClusterBy cluster uncluster is a generic parallel
  clustering strategy combinator for lists of Closures, evaluating
  clusters generated by cluster in parallel. Clusters are
  distributed by lazy work stealing.
  The function uncluster must be a left inverse of cluster, that is,
  uncluster . cluster must be the identity.

parClosureListChunked
  parClosureListChunked n evaluates chunks of size n of a list of
  Closures in parallel according to the given strategy (wrapped in a
  Closure). Chunks are distributed by lazy work stealing.
  For instance, dividing the list [c1,c2,c3,c4,c5] into chunks of
  size 3 results in the following list of chunks
  [[c1,c2,c3], [c4,c5]].

parClosureListSliced
  parClosureListSliced n evaluates n slices of a list of Closures in
  parallel according to the given strategy (wrapped in a Closure).
  Slices are distributed by lazy work stealing.
  For instance, dividing the list [c1,c2,c3,c4,c5] into 3 slices
  results in the following list of slices [[c1,c4], [c2,c5], [c3]].

parMap
  Task farm, evaluates tasks (function Closure applied to an element
  of the input list) in parallel and according to the given strategy
  (wrapped in a Closure).
  Note that parMap should only be used if the terms in the input
  list are already in normal form, as they may be forced
  sequentially otherwise.

parMapNF
  Specialisation of parMap to the fully forcing Closure strategy.
  That is, parMapNF forces every element of the output list to
  normal form.

parMapChunked
  Chunking task farm, divides the input list into chunks of given
  size and evaluates tasks (function Closure mapped on a chunk of
  the input list) in parallel and according to the given strategy
  (wrapped in a Closure).
  parMapChunked should only be used if the terms in the input list
  are already in normal form.

parMapChunkedNF
  Specialisation of parMapChunked to the fully forcing Closure
  strategy.

parMapSliced
  Slicing task farm, divides the input list into the given number of
  slices and evaluates tasks (function Closure mapped on a slice of
  the input list) in parallel and according to the given strategy
  (wrapped in a Closure).
  parMapSliced should only be used if the terms in the input list
  are already in normal form.

parMapSlicedNF
  Specialisation of parMapSliced to the fully forcing Closure
  strategy.
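The chunking and slicing described above correspond to ordinary list clustering functions. The helper names chunk and slice below are illustrative, not part of the HdpH API; note that slicing is just the transpose of chunking, which reproduces the documented examples.

```haskell
import Data.List (transpose)

-- Divide a list into contiguous chunks of size n (n must be positive);
-- the last chunk may be shorter.
chunk :: Int -> [a] -> [[a]]
chunk _ [] = []
chunk n xs = take n xs : chunk n (drop n xs)

-- Divide a list into n round-robin slices.
slice :: Int -> [a] -> [[a]]
slice n = transpose . chunk n
```

For example, chunk 3 [c1,c2,c3,c4,c5] yields [[c1,c2,c3],[c4,c5]], and slice 3 of the same list yields [[c1,c4],[c2,c5],[c3]], matching the examples above. A corresponding uncluster (concat for chunks, and transpose-then-concat for slices) is a left inverse, as parClosureListClusterBy requires.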
parClosureMapM
  Monadic task farm for Closures, evaluates tasks (Par-monadic
  function Closure applied to a Closure of the input list) in
  parallel.
  Note the absence of a strategy argument; strategies aren't needed
  because they can be baked into the monadic function Closure.

parMapM
  Monadic task farm, evaluates tasks (Par-monadic function Closure
  applied to an element of the input list) in parallel.
  Note the absence of a strategy argument; strategies aren't needed
  because they can be baked into the monadic function Closure.
  parMapM should only be used if the terms in the input list are
  already in normal form, as they may be forced sequentially
  otherwise.

parMapM_
  Specialisation of parMapM, not returning any result.

pushMap
  Task farm like parMap but pushes tasks in a round-robin fashion to
  the given list of nodes.

pushMapNF
  Task farm like parMapNF but pushes tasks in a round-robin fashion
  to the given list of nodes.

pushClosureMapM
  Monadic task farm for Closures like parClosureMapM but pushes
  tasks in a round-robin fashion to the given list of nodes.

pushMapM
  Monadic task farm like parMapM but pushes tasks in a round-robin
  fashion to the given list of nodes.

pushMapM_
  Monadic task farm like parMapM_ but pushes tasks in a round-robin
  fashion to the given list of nodes.

pushRandClosureMapM
  Monadic task farm for Closures like parClosureMapM but pushes to
  random nodes on the given list.

pushRandMapM
  Monadic task farm like parMapM but pushes to random nodes on the
  given list.

pushRandMapM_
  Monadic task farm like parMapM_ but pushes to random nodes on the
  given list.

divideAndConquer
  Sequential divide-and-conquer skeleton.
  divideAndConquer trivial decompose combine f x repeatedly
  decomposes the problem x until trivial, applies f to the trivial
  sub-problems and combines the solutions.

parDivideAndConquer
  Parallel divide-and-conquer skeleton with lazy work distribution.
  parDivideAndConquer trivial_clo decompose_clo combine_clo f_clo x
  follows the divide-and-conquer pattern of divideAndConquer except
  that, for technical reasons, all arguments are Closures.
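The round-robin placement used by the pushMap and pushMapM families above amounts to pairing tasks with a cycled node list. A minimal sketch follows; roundRobin is a hypothetical helper for illustration, not part of the HdpH API.

```haskell
-- Pair each task with a node, cycling through the node list round-robin.
-- Assumes a non-empty node list (cycle [] is undefined).
roundRobin :: [node] -> [task] -> [(node, task)]
roundRobin nodes tasks = zip (cycle nodes) tasks
```

With two nodes and three tasks, the third task wraps around to the first node again, which is the distribution pattern the push* skeletons implement across the given node list.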
pushDivideAndConquer
  Parallel divide-and-conquer skeleton with eager random work
  distribution, pushing work to the given list of nodes.
  pushDivideAndConquer nodes trivial_clo decompose_clo combine_clo f_clo x
  follows the divide-and-conquer pattern of divideAndConquer except
  that, for technical reasons, all arguments are Closures.

Module index (package hdph-0.0.1): Control.Parallel.HdpH,
Control.Parallel.HdpH.Conf, Control.Parallel.HdpH.Strategies, and
internal modules under Control.Parallel.HdpH.Internal (Comm, GRef,
IVar, Location, Misc, Scheduler, Sparkpool, Threadpool, and
associated Data, State and Type submodules).
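The sequential skeleton's recursion can be sketched directly from the description of divideAndConquer above. The type signature and the Fibonacci instantiation are illustrative assumptions; in particular, combine is assumed to receive the original problem alongside the sub-solutions.

```haskell
-- Sequential divide-and-conquer skeleton, as documented above:
-- repeatedly decompose the problem until trivial, solve trivial
-- sub-problems with f, and combine the sub-solutions.
divideAndConquer
  :: (a -> Bool)      -- trivial: is the problem trivially solvable?
  -> (a -> [a])       -- decompose a problem into sub-problems
  -> (a -> [b] -> b)  -- combine sub-solutions (may inspect the problem)
  -> (a -> b)         -- solve a trivial problem
  -> a -> b
divideAndConquer trivial decompose combine f x
  | trivial x = f x
  | otherwise = combine x (map solveRec (decompose x))
  where
    solveRec = divideAndConquer trivial decompose combine f

-- Illustrative instantiation: naive Fibonacci.
fib :: Int -> Integer
fib = divideAndConquer
        (<= 1)                  -- trivial below 2
        (\n -> [n - 1, n - 2])  -- decompose into two sub-problems
        (\_ -> sum)             -- combine by summing sub-solutions
        fromIntegral            -- fib 0 = 0, fib 1 = 1
```

parDivideAndConquer and pushDivideAndConquer follow the same pattern, except that all four function arguments are passed as Closures so that sub-problems can be shipped to other nodes.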