base-4.13.0.0: Basic libraries
Copyright: (c) The University of Glasgow 1994-2002
License: see libraries/base/LICENSE
Maintainer: cvs-ghc@haskell.org
Stability: internal
Portability: non-portable (GHC extensions)
Safe Haskell: Unsafe
Language: Haskell2010

GHC.Conc

Description

Basic concurrency stuff.


Documentation

data ThreadId Source #

A ThreadId is an abstract type representing a handle to a thread. ThreadId is an instance of Eq, Ord and Show, where the Ord instance implements an arbitrary total ordering over ThreadIds. The Show instance lets you convert an arbitrary-valued ThreadId to string form; showing a ThreadId value is occasionally useful when debugging or diagnosing the behaviour of a concurrent program.

Note: in GHC, if you have a ThreadId, you essentially have a pointer to the thread itself. This means the thread itself can't be garbage collected until you drop the ThreadId. This misfeature will hopefully be corrected at a later date.

Constructors

ThreadId ThreadId# 

Instances

Eq ThreadId Source #

Since: 4.2.0.0

Instance details

Defined in GHC.Conc.Sync

Ord ThreadId Source #

Since: 4.2.0.0

Instance details

Defined in GHC.Conc.Sync

Show ThreadId Source #

Since: 4.2.0.0

Instance details

Defined in GHC.Conc.Sync

Forking and suchlike

forkIO :: IO () -> IO ThreadId Source #

Creates a new thread to run the IO computation passed as the first argument, and returns the ThreadId of the newly created thread.

The new thread will be a lightweight, unbound thread. Foreign calls made by this thread are not guaranteed to be made by any particular OS thread; if you need foreign calls to be made by a particular OS thread, then use forkOS instead.

The new thread inherits the masked state of the parent (see mask).

The newly created thread has an exception handler that discards the exceptions BlockedIndefinitelyOnMVar, BlockedIndefinitelyOnSTM, and ThreadKilled, and passes all other exceptions to the uncaught exception handler.
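For example, a minimal sketch of forkIO in use, signalling completion back to the parent through an MVar (the worker body is illustrative):

  import Control.Concurrent (forkIO)
  import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

  main :: IO ()
  main = do
    done <- newEmptyMVar
    _tid <- forkIO $ do
      putStrLn "hello from the child thread"   -- illustrative work
      putMVar done ()                          -- signal completion to the parent
    takeMVar done                              -- block until the child has finished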

forkIOWithUnmask :: ((forall a. IO a -> IO a) -> IO ()) -> IO ThreadId Source #

Like forkIO, but the child thread is passed a function that can be used to unmask asynchronous exceptions. This function is typically used in the following way:

 ... mask_ $ forkIOWithUnmask $ \unmask ->
                catch (unmask ...) handler

so that the exception handler in the child thread is established with asynchronous exceptions masked, meanwhile the main body of the child thread is executed in the unmasked state.

Note that the unmask function passed to the child thread should only be used in that thread; the behaviour is undefined if it is invoked in a different thread.

Since: 4.4.0.0
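A sketch filling in the fragment above; the worker body and handler are placeholders:

  import Control.Concurrent (ThreadId, forkIOWithUnmask)
  import Control.Exception (SomeException, catch, mask_)

  -- Fork a worker whose exception handler is installed with asynchronous
  -- exceptions masked, while the body itself runs unmasked.
  forkWorker :: IO () -> IO ThreadId
  forkWorker body =
    mask_ $ forkIOWithUnmask $ \unmask ->
      unmask body `catch` \e ->
        putStrLn ("worker failed: " ++ show (e :: SomeException))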

forkOn :: Int -> IO () -> IO ThreadId Source #

Like forkIO, but lets you specify on which capability the thread should run. Unlike a forkIO thread, a thread created by forkOn will stay on the same capability for its entire lifetime (forkIO threads can migrate between capabilities according to the scheduling policy). forkOn is useful for overriding the scheduling policy when you know in advance how best to distribute the threads.

The Int argument specifies a capability number (see getNumCapabilities). Typically capabilities correspond to physical processors, but the exact behaviour is implementation-dependent. The value passed to forkOn is interpreted modulo the total number of capabilities as returned by getNumCapabilities.

GHC note: the number of capabilities is specified by the +RTS -N option when the program is started. Capabilities can be fixed to actual processor cores with +RTS -qa if the underlying operating system supports that, although in practice this is usually unnecessary (and may actually degrade performance in some cases - experimentation is recommended).

Since: 4.4.0.0
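A sketch spawning one pinned worker per capability; the work done by each worker is illustrative:

  import Control.Concurrent (forkOn, getNumCapabilities)
  import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
  import Control.Monad (forM, forM_)

  main :: IO ()
  main = do
    caps  <- getNumCapabilities
    dones <- forM [0 .. caps - 1] $ \cap -> do
      done <- newEmptyMVar
      _    <- forkOn cap $ do
        putStrLn ("worker pinned to capability " ++ show cap)
        putMVar done ()
      return done
    forM_ dones takeMVar                       -- wait for every worker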

forkOnWithUnmask :: Int -> ((forall a. IO a -> IO a) -> IO ()) -> IO ThreadId Source #

Like forkIOWithUnmask, but the child thread is pinned to the given CPU, as with forkOn.

Since: 4.4.0.0

numCapabilities :: Int Source #

The value passed to the +RTS -N flag. This is the number of Haskell threads that can run truly simultaneously at any given time, and is typically set to the number of physical processor cores on the machine.

Strictly speaking it is better to use getNumCapabilities, because the number of capabilities might vary at runtime.

getNumCapabilities :: IO Int Source #

Returns the number of Haskell threads that can run truly simultaneously (on separate physical processors) at any given time. To change this value, use setNumCapabilities.

Since: 4.4.0.0

setNumCapabilities :: Int -> IO () Source #

Set the number of Haskell threads that can run truly simultaneously (on separate physical processors) at any given time. The number passed to forkOn is interpreted modulo this value. The initial value is given by the +RTS -N runtime flag.

This is also the number of threads that will participate in parallel garbage collection. It is strongly recommended that the number of capabilities is not set larger than the number of physical processor cores, and it may often be beneficial to leave one or more cores free to avoid contention with other processes in the machine.

Since: 4.5.0.0
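A sketch that sizes the capability count from the number of processors at startup, leaving one core free as recommended above:

  import GHC.Conc (getNumProcessors, setNumCapabilities)

  main :: IO ()
  main = do
    procs <- getNumProcessors
    setNumCapabilities (max 1 (procs - 1))   -- always keep at least one capability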

getNumProcessors :: IO Int Source #

Returns the number of CPUs that the machine has.

Since: 4.5.0.0

numSparks :: IO Int Source #

Returns the number of sparks currently in the local spark pool.

myThreadId :: IO ThreadId Source #

Returns the ThreadId of the calling thread (GHC only).

killThread :: ThreadId -> IO () Source #

killThread raises the ThreadKilled exception in the given thread (GHC only).

killThread tid = throwTo tid ThreadKilled

throwTo :: Exception e => ThreadId -> e -> IO () Source #

throwTo raises an arbitrary exception in the target thread (GHC only).

Exception delivery synchronizes between the source and target thread: throwTo does not return until the exception has been raised in the target thread. The calling thread can thus be certain that the target thread has received the exception. Exception delivery is also atomic with respect to other exceptions. Atomicity is a useful property to have when dealing with race conditions: e.g. if there are two threads that can kill each other, it is guaranteed that only one of the threads will get to kill the other.

Whatever work the target thread was doing when the exception was raised is not lost: the computation is suspended until required by another thread.

If the target thread is currently making a foreign call, then the exception will not be raised (and hence throwTo will not return) until the call has completed. This is the case regardless of whether the call is inside a mask or not. However, in GHC a foreign call can be annotated as interruptible, in which case a throwTo will cause the RTS to attempt to cause the call to return; see the GHC documentation for more details.

Important note: the behaviour of throwTo differs from that described in the paper "Asynchronous exceptions in Haskell" (http://research.microsoft.com/~simonpj/Papers/asynch-exns.htm). In the paper, throwTo is non-blocking; but the library implementation adopts a more synchronous design in which throwTo does not return until the exception is received by the target thread. The trade-off is discussed in Section 9 of the paper. Like any blocking operation, throwTo is therefore interruptible (see Section 5.3 of the paper). Unlike other interruptible operations, however, throwTo is always interruptible, even if it does not actually block.

There is no guarantee that the exception will be delivered promptly, although the runtime will endeavour to ensure that arbitrary delays don't occur. In GHC, an exception can only be raised when a thread reaches a safe point, where a safe point is where memory allocation occurs. Some loops do not perform any memory allocation inside the loop and therefore cannot be interrupted by a throwTo.

If the target of throwTo is the calling thread, then the behaviour is the same as throwIO, except that the exception is thrown as an asynchronous exception. This means that if there is an enclosing pure computation, which would be the case if the current IO operation is inside unsafePerformIO or unsafeInterleaveIO, that computation is not permanently replaced by the exception, but is suspended as if it had received an asynchronous exception.

Note that if throwTo is called with the current thread as the target, the exception will be thrown even if the thread is currently inside mask or uninterruptibleMask.
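A sketch of throwTo delivering ThreadKilled to a blocked child thread; the delays exist only to order the output:

  import Control.Concurrent (forkIO, threadDelay, throwTo)
  import Control.Exception (AsyncException (ThreadKilled), SomeException, catch)

  main :: IO ()
  main = do
    tid <- forkIO $
      (threadDelay 10000000 >> putStrLn "never printed")
        `catch` \e -> putStrLn ("child received: " ++ show (e :: SomeException))
    threadDelay 100000            -- give the child time to block
    throwTo tid ThreadKilled      -- equivalent to killThread tid
    threadDelay 100000            -- give the handler time to run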

par :: a -> b -> b infixr 0 Source #

pseq :: a -> b -> b infixr 0 Source #

runSparks :: IO () Source #

Internal function used by the RTS to run sparks.

yield :: IO () Source #

The yield action allows (forces, in a co-operative multitasking implementation) a context-switch to any other currently runnable threads (if any), and is occasionally useful when implementing concurrency abstractions.

labelThread :: ThreadId -> String -> IO () Source #

labelThread stores a string as an identifier for this thread, provided the RTS was built with debugging support. The identifier is used in the debugging output to make it easier to distinguish between threads (otherwise only the address of the thread state object in the heap is available).

Other applications like the graphical Concurrent Haskell Debugger (http://www.informatik.uni-kiel.de/~fhu/chd/) may choose to overload labelThread for their purposes as well.

mkWeakThreadId :: ThreadId -> IO (Weak ThreadId) Source #

Make a weak pointer to a ThreadId. It can be important to do this if you want to hold a reference to a ThreadId while still allowing the thread to receive the BlockedIndefinitely family of exceptions (e.g. BlockedIndefinitelyOnMVar). Holding a normal ThreadId reference will prevent the delivery of BlockedIndefinitely exceptions because the reference could be used as the target of throwTo at any time, which would unblock the thread.

Holding a Weak ThreadId, on the other hand, will not prevent the thread from receiving BlockedIndefinitely exceptions. It is still possible to throw an exception to a Weak ThreadId, but the caller must use deRefWeak first to determine whether the thread still exists.

Since: 4.6.0.0
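A sketch of dereferencing a weak ThreadId before throwing to it; in real code the strong tid reference would be dropped after making the weak pointer:

  import Control.Concurrent (forkIO, threadDelay)
  import GHC.Conc (killThread, mkWeakThreadId)
  import System.Mem.Weak (deRefWeak)

  main :: IO ()
  main = do
    tid  <- forkIO (threadDelay 1000000)
    weak <- mkWeakThreadId tid
    mtid <- deRefWeak weak              -- Nothing if the thread no longer exists
    case mtid of
      Nothing   -> putStrLn "thread already gone"
      Just tid' -> killThread tid'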

data ThreadStatus Source #

The current status of a thread

Constructors

ThreadRunning

the thread is currently runnable or running

ThreadFinished

the thread has finished

ThreadBlocked BlockReason

the thread is blocked on some resource

ThreadDied

the thread received an uncaught exception

data BlockReason Source #

Constructors

BlockedOnMVar

blocked on MVar

BlockedOnBlackHole

blocked on a computation in progress by another thread

BlockedOnException

blocked in throwTo

BlockedOnSTM

blocked in retry in an STM transaction

BlockedOnForeignCall

currently in a foreign call

BlockedOnOther

blocked on some other resource. Without -threaded, I/O and threadDelay show up as BlockedOnOther, with -threaded they show up as BlockedOnMVar.

Instances

Eq BlockReason Source #

Since: 4.3.0.0

Instance details

Defined in GHC.Conc.Sync

Ord BlockReason Source #

Since: 4.3.0.0

Instance details

Defined in GHC.Conc.Sync

Show BlockReason Source #

Since: 4.3.0.0

Instance details

Defined in GHC.Conc.Sync

threadCapability :: ThreadId -> IO (Int, Bool) Source #

Returns the number of the capability on which the thread is currently running, and a boolean indicating whether the thread is locked to that capability or not. A thread is locked to a capability if it was created with forkOn.

Since: 4.4.0.0
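A sketch querying the capability of a pinned thread:

  import Control.Concurrent (forkOn, myThreadId, threadDelay)
  import GHC.Conc (threadCapability)

  main :: IO ()
  main = do
    _ <- forkOn 0 $ do
      tid <- myThreadId
      (cap, pinned) <- threadCapability tid
      putStrLn ("on capability " ++ show cap ++ ", pinned: " ++ show pinned)
    threadDelay 100000                  -- crude wait for the child to print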

newStablePtrPrimMVar :: MVar () -> IO (StablePtr PrimMVar) Source #

Make a StablePtr that can be passed to the C function hs_try_putmvar(). The RTS wants a StablePtr to the underlying MVar#, but a StablePtr# can only refer to lifted types, so we have to cheat by coercing.

Waiting

threadDelay :: Int -> IO () Source #

Suspends the current thread for a given number of microseconds (GHC only).

There is no guarantee that the thread will be rescheduled promptly when the delay has expired, but the thread will never continue to run earlier than specified.

registerDelay :: Int -> IO (TVar Bool) Source #

Switches the value of the returned TVar from its initial value of False to True after the given number of microseconds. The caveats associated with threadDelay also apply.
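A minimal sketch using the returned TVar as an STM timeout:

  import GHC.Conc (atomically, readTVar, registerDelay, retry)

  -- Block until the given number of microseconds has elapsed.
  waitMicros :: Int -> IO ()
  waitMicros micros = do
    timedOut <- registerDelay micros
    atomically $ do
      done <- readTVar timedOut
      if done then return () else retry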

threadWaitRead :: Fd -> IO () Source #

Block the current thread until data is available to read on the given file descriptor (GHC only).

This will throw an IOError if the file descriptor was closed while this thread was blocked. To safely close a file descriptor that has been used with threadWaitRead, use closeFdWith.

threadWaitWrite :: Fd -> IO () Source #

Block the current thread until data can be written to the given file descriptor (GHC only).

This will throw an IOError if the file descriptor was closed while this thread was blocked. To safely close a file descriptor that has been used with threadWaitWrite, use closeFdWith.

threadWaitReadSTM :: Fd -> IO (STM (), IO ()) Source #

Returns an STM action that can be used to wait for data to read from a file descriptor. The second returned value is an IO action that can be used to deregister interest in the file descriptor.

threadWaitWriteSTM :: Fd -> IO (STM (), IO ()) Source #

Returns an STM action that can be used to wait until data can be written to a file descriptor. The second returned value is an IO action that can be used to deregister interest in the file descriptor.

closeFdWith Source #

Arguments

:: (Fd -> IO ())

Low-level action that performs the real close.

-> Fd

File descriptor to close.

-> IO () 

Close a file descriptor in a concurrency-safe way (GHC only). If you are using threadWaitRead or threadWaitWrite to perform blocking I/O, you must use this function to close file descriptors, or blocked threads may not be woken.

Any threads that are blocked on the file descriptor via threadWaitRead or threadWaitWrite will be unblocked by having IO exceptions thrown.
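A sketch, assuming the unix package's System.Posix.IO.closeFd as the low-level close action:

  import GHC.Conc (closeFdWith)
  import System.Posix.IO (closeFd)      -- from the unix package (assumption)
  import System.Posix.Types (Fd)

  -- Safely close a descriptor that other threads may be waiting on
  -- with threadWaitRead or threadWaitWrite.
  safeClose :: Fd -> IO ()
  safeClose = closeFdWith closeFd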

Allocation counter and limit

setAllocationCounter :: Int64 -> IO () Source #

Every thread has an allocation counter that tracks how much memory has been allocated by the thread. The counter is initialized to zero, and setAllocationCounter sets the current value. The allocation counter counts *down*, so in the absence of a call to setAllocationCounter its value is the negation of the number of bytes of memory allocated by the thread.

There are two things that you can do with this counter:

  • query its current value with getAllocationCounter, and
  • use it together with enableAllocationLimit to impose a per-thread allocation limit.

Allocation accounting is accurate only to about 4Kbytes.

Since: 4.8.0.0

getAllocationCounter :: IO Int64 Source #

Return the current value of the allocation counter for the current thread.

Since: 4.8.0.0

enableAllocationLimit :: IO () Source #

Enables the allocation counter to be treated as a limit for the current thread. When the allocation limit is enabled, if the allocation counter counts down below zero, the thread will be sent the AllocationLimitExceeded asynchronous exception. When this happens, the counter is reinitialised (by default to 100K, but tunable with the +RTS -xq option) so that it can handle the exception and perform any necessary clean up. If it exhausts this additional allowance, another AllocationLimitExceeded exception is sent, and so forth. Like other asynchronous exceptions, the AllocationLimitExceeded exception is deferred while the thread is inside mask or an exception handler in catch.

Note that memory allocation is unrelated to live memory, also known as heap residency. A thread can allocate a large amount of memory and retain anything between none and all of it. It is better to think of the allocation limit as a limit on CPU time, rather than a limit on memory.

Compared to using timeouts, allocation limits don't count time spent blocked or in foreign calls.

Since: 4.8.0.0
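A sketch of imposing a rough per-thread allocation budget around an action; a more careful version would use bracket and restore the previous counter value:

  import Control.Exception (SomeException, try)
  import Data.Int (Int64)
  import GHC.Conc (disableAllocationLimit, enableAllocationLimit, setAllocationCounter)

  -- Run an action with an approximate allocation budget, in bytes.
  withAllocationBudget :: Int64 -> IO a -> IO (Either SomeException a)
  withAllocationBudget budget act = do
    setAllocationCounter budget
    enableAllocationLimit
    result <- try act            -- AllocationLimitExceeded arrives asynchronously
    disableAllocationLimit
    return result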

disableAllocationLimit :: IO () Source #

Disable allocation limit processing for the current thread.

Since: 4.8.0.0

TVars

newtype STM a Source #

A monad supporting atomic memory transactions.

Constructors

STM (State# RealWorld -> (# State# RealWorld, a #)) 

Instances

Monad STM Source #

Since: 4.3.0.0

Instance details

Defined in GHC.Conc.Sync

Methods

(>>=) :: STM a -> (a -> STM b) -> STM b Source #

(>>) :: STM a -> STM b -> STM b Source #

return :: a -> STM a Source #

Functor STM Source #

Since: 4.3.0.0

Instance details

Defined in GHC.Conc.Sync

Methods

fmap :: (a -> b) -> STM a -> STM b Source #

(<$) :: a -> STM b -> STM a Source #

Applicative STM Source #

Since: 4.8.0.0

Instance details

Defined in GHC.Conc.Sync

Methods

pure :: a -> STM a Source #

(<*>) :: STM (a -> b) -> STM a -> STM b Source #

liftA2 :: (a -> b -> c) -> STM a -> STM b -> STM c Source #

(*>) :: STM a -> STM b -> STM b Source #

(<*) :: STM a -> STM b -> STM a Source #

MonadPlus STM Source #

Since: 4.3.0.0

Instance details

Defined in GHC.Conc.Sync

Methods

mzero :: STM a Source #

mplus :: STM a -> STM a -> STM a Source #

Alternative STM Source #

Since: 4.8.0.0

Instance details

Defined in GHC.Conc.Sync

Methods

empty :: STM a Source #

(<|>) :: STM a -> STM a -> STM a Source #

some :: STM a -> STM [a] Source #

many :: STM a -> STM [a] Source #

atomically :: STM a -> IO a Source #

Perform a series of STM actions atomically.

Using atomically inside an unsafePerformIO or unsafeInterleaveIO subverts some of the guarantees that STM provides. It makes it possible to run a transaction inside of another transaction, depending on when the thunk is evaluated. If a nested transaction is attempted, an exception is thrown by the runtime. It is possible to safely use atomically inside unsafePerformIO or unsafeInterleaveIO, but the typechecker does not rule out programs that may attempt nested transactions, meaning that the programmer must take special care to prevent these.

However, there are functions for creating transactional variables that can always be safely called in unsafePerformIO. See: newTVarIO, newTChanIO, newBroadcastTChanIO, newTQueueIO, newTBQueueIO, and newTMVarIO.

Using unsafePerformIO inside of atomically is also dangerous but for different reasons. See unsafeIOToSTM for more on this.
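A sketch of the classic account-transfer example, using retry and the TVar operations documented below:

  import GHC.Conc (STM, TVar, atomically, newTVarIO, readTVar, retry, writeTVar)

  -- Move funds between two accounts in a single atomic transaction.
  transfer :: TVar Int -> TVar Int -> Int -> STM ()
  transfer from to amount = do
    balance <- readTVar from
    if balance < amount
      then retry                              -- block until the balance changes
      else do
        writeTVar from (balance - amount)
        toBalance <- readTVar to
        writeTVar to (toBalance + amount)

  main :: IO ()
  main = do
    a <- newTVarIO 100
    b <- newTVarIO 0
    atomically (transfer a b 40)
    atomically (readTVar b) >>= print        -- prints 40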

retry :: STM a Source #

Retry execution of the current memory transaction because it has seen values in TVars which mean that it should not continue (e.g. the TVars represent a shared buffer that is now empty). The implementation may block the thread until one of the TVars that it has read from has been updated. (GHC only)

orElse :: STM a -> STM a -> STM a Source #

Compose two alternative STM actions (GHC only).

If the first action completes without retrying then it forms the result of the orElse. Otherwise, if the first action retries, then the second action is tried in its place. If both actions retry then the orElse as a whole retries.
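A sketch composing two sources with orElse, preferring the first:

  import GHC.Conc (STM, TVar, orElse, readTVar, retry, writeTVar)

  -- Take an item from whichever mailbox is non-empty, preferring the left one.
  takeEither :: TVar (Maybe a) -> TVar (Maybe a) -> STM a
  takeEither left right = takeFrom left `orElse` takeFrom right
    where
      takeFrom tv = do
        mx <- readTVar tv
        case mx of
          Nothing -> retry                 -- this branch gives up; orElse tries the other
          Just x  -> do
            writeTVar tv Nothing
            return x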

throwSTM :: Exception e => e -> STM a Source #

A variant of throw that can only be used within the STM monad.

Throwing an exception in STM aborts the transaction and propagates the exception. If the exception is caught via catchSTM, only the changes enclosed by the catch are rolled back; changes made outside of catchSTM persist.

If the exception is not caught inside of the STM, it is re-thrown by atomically, and the entire STM is rolled back.

Although throwSTM has a type that is an instance of the type of throw, the two functions are subtly different:

throw e    `seq` x  ===> throw e
throwSTM e `seq` x  ===> x

The first example will cause the exception e to be raised, whereas the second one won't. In fact, throwSTM will only cause an exception to be raised when it is used within the STM monad. The throwSTM variant should be used in preference to throw to raise an exception within the STM monad because it guarantees ordering with respect to other STM operations, whereas throw does not.

catchSTM :: Exception e => STM a -> (e -> STM a) -> STM a Source #

Exception handling within STM actions.

catchSTM m f catches any exception thrown by m using throwSTM, using the function f to handle the exception. If an exception is thrown, any changes made by m are rolled back, but changes prior to m persist.
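A sketch illustrating the rollback behaviour described above; ErrorCall is used only as a convenient exception type:

  import Control.Exception (ErrorCall (..))
  import GHC.Conc (STM, TVar, catchSTM, readTVar, throwSTM, writeTVar)

  -- The write of 1 persists; the write of 2 is rolled back when the
  -- exception is caught, so the final read returns 1.
  rollbackDemo :: TVar Int -> STM Int
  rollbackDemo tv = do
    writeTVar tv 1
    (writeTVar tv 2 >> throwSTM (ErrorCall "boom"))
      `catchSTM` \(ErrorCall _) -> return ()
    readTVar tv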

data TVar a Source #

Shared memory locations that support atomic memory transactions.

Constructors

TVar (TVar# RealWorld a) 

Instances

Eq (TVar a) Source #

Since: 4.8.0.0

Instance details

Defined in GHC.Conc.Sync

Methods

(==) :: TVar a -> TVar a -> Bool #

(/=) :: TVar a -> TVar a -> Bool #

newTVar :: a -> STM (TVar a) Source #

Create a new TVar holding the supplied value.

newTVarIO :: a -> IO (TVar a) Source #

IO version of newTVar. This is useful for creating top-level TVars using unsafePerformIO, because using atomically inside unsafePerformIO isn't possible.
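A sketch of the top-level-TVar idiom mentioned above; the NOINLINE pragma ensures the TVar is created only once:

  import GHC.Conc (TVar, newTVarIO)
  import System.IO.Unsafe (unsafePerformIO)

  -- A program-wide counter created outside of any transaction.
  globalCounter :: TVar Int
  globalCounter = unsafePerformIO (newTVarIO 0)
  {-# NOINLINE globalCounter #-}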

readTVar :: TVar a -> STM a Source #

Return the current value stored in a TVar.

readTVarIO :: TVar a -> IO a Source #

Return the current value stored in a TVar. This is equivalent to

 readTVarIO = atomically . readTVar

but works much faster, because it doesn't perform a complete transaction, it just reads the current value of the TVar.

writeTVar :: TVar a -> a -> STM () Source #

Write the supplied value into a TVar.

unsafeIOToSTM :: IO a -> STM a Source #

Unsafely performs IO in the STM monad. Beware: this is a highly dangerous thing to do.

  • The STM implementation will often run transactions multiple times, so you need to be prepared for this if your IO has any side effects.
  • The STM implementation will abort transactions that are known to be invalid and need to be restarted. This may happen in the middle of unsafeIOToSTM, so make sure you don't acquire any resources that need releasing (exception handlers are ignored when aborting the transaction). That includes doing any IO using Handles, for example. Getting this wrong will probably lead to random deadlocks.
  • The transaction may have seen an inconsistent view of memory when the IO runs. Invariants that you expect to be true throughout your program may not be true inside a transaction, due to the way transactions are implemented. Normally this wouldn't be visible to the programmer, but using unsafeIOToSTM can expose it.

Miscellaneous

withMVar :: MVar a -> (a -> IO b) -> IO b Source #

Provide an IO action with the current value of an MVar. The MVar will be empty for the duration that the action is running.
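A sketch using withMVar as a simple lock around a critical section:

  import Control.Concurrent.MVar (newMVar, withMVar)

  main :: IO ()
  main = do
    lock <- newMVar ()                  -- a full MVar acts as an available lock
    withMVar lock $ \() ->
      putStrLn "only one thread at a time runs this"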