System.Hworker

HworkerConfig
The main configuration for workers. Each pool of workers should have a unique 'hwconfigName', as the queues are set up by that name, and if you have different types of data written in, they will likely be unable to be deserialized (and thus could end up in the 'broken' queue).

The 'hwconfigLogger' defaults to writing to stdout, so you will likely want to replace it with something appropriate (like a function from a logging package).

The 'hwconfigTimeout' is really important. It determines the length of time after a job is started before the 'monitor' will decide that the job must have died and will restart it. If it is shorter than the length of time that a normal job takes to complete, the jobs _will_ be run multiple times. This is _semantically_ okay, as this is an at-least-once processor, but it obviously won't be desirable. It defaults to 120 seconds.

The 'hwconfigExceptionBehavior' controls what happens when an exception is thrown within a job. 'hwconfigFailedQueueSize' controls how many 'failed' jobs will be kept. It defaults to 1000.

RedisConnection
When configuring a worker, you can tell it to use an existing Redis connection pool (which you may have for the rest of your application). Otherwise, you can specify connection info. By default, hworker tries to connect to localhost, which may not be true for your production application.

Hworker
The worker data type - it is parametrized by the worker state (the 's') and the job type (the 't').

ExceptionBehavior
What should happen when an unexpected exception is thrown in a job - it can be treated as either a 'Failure' (the default) or a 'Retry' (if you know the only exceptions are triggered by intermittent problems).

Job
Each worker that you create will be responsible for one type of job, defined by a 'Job' instance.

The job can do many different things (as the value can be a variant), but be careful not to break deserialization if you add new things it can do.

The job will take some state (passed as the 's' parameter), which does not vary based on the job, and the actual job data structure. The data structure (the 't' parameter) will be stored and copied a few times in Redis during the job's lifecycle, so it is generally a good idea for it to be relatively small (and for it to be able to look up any data it needs while the job is running).

Finally, while deriving FromJSON and ToJSON instances automatically might seem like a good idea, you will most likely be better off defining them manually, so you can make sure they are backwards compatible if you change them, as any jobs that can't be deserialized will not be run (and will end up in the 'broken' queue). This will only happen if the queue is non-empty when you replace the running application version, but this is obviously possible and could be likely depending on your use.

Result
Jobs can return 'Success', 'Retry' (with a message), or 'Failure' (with a message). Jobs that return 'Failure' are stored in the 'failed' queue and are not re-run. Jobs that return 'Retry' are re-run.

defaultHworkerConfig
The default worker config - it needs a name and a state (as those will always be unique).

create
Create a new worker with the default 'HworkerConfig'. Note that you must create at least one 'worker' and 'monitor' for the queue to actually process jobs (and for it to retry ones that time out).

createWith
Create a new worker with a specified 'HworkerConfig'. Note that you must create at least one 'worker' and 'monitor' for the queue to actually process jobs (and for it to retry ones that time out).

destroy
Destroy a worker. This will delete all the queues, clearing out all existing 'jobs' and the 'broken' and 'failed' queues. There is no need to do this in normal applications (and most likely, you won't want to).

queue
Adds a job to the queue. Returns whether the operation succeeded.
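To make the 'Job', 'Result', 'create', and 'queue' pieces above concrete, here is a minimal sketch. The PrintJob type, the State counter, and the hand-written JSON instances are illustrative assumptions (not part of the library), and the exact signatures are inferred from the descriptions above, so they may differ in detail between hworker versions.

> {-# LANGUAGE OverloadedStrings #-}
> module Example where
>
> import           Control.Concurrent.MVar (MVar, modifyMVar, newMVar)
> import           Data.Aeson              (FromJSON (..), ToJSON (..))
> import qualified Data.Aeson              as A
> import           System.Hworker
>
> -- A hypothetical job type. The instances are written by hand (rather than
> -- derived) so they can be kept backwards compatible if the type changes.
> data PrintJob = Print deriving (Show)
>
> instance ToJSON PrintJob where
>   toJSON Print = A.String "print"
>
> instance FromJSON PrintJob where
>   parseJSON (A.String "print") = pure Print
>   parseJSON _                  = fail "unknown PrintJob"
>
> -- Shared worker state: a counter that every job increments.
> newtype State = State (MVar Int)
>
> instance Job State PrintJob where
>   job (State counter) Print = do
>     n <- modifyMVar counter (\v -> pure (v + 1, v))
>     putStrLn ("print job #" ++ show n)
>     pure Success
>
> -- Create a pool named "printer" and enqueue one job. A 'worker' and
> -- 'monitor' must also be running for the job to actually be processed.
> enqueueOne :: IO ()
> enqueueOne = do
>   st <- State <$> newMVar 0
>   hw <- create "printer" st
>   ok <- queue hw Print
>   print ok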
worker
Creates a new worker thread. This is blocking, so you will want to 'forkIO' this into a thread. You can have any number of these (and on any number of servers); the more there are, the faster jobs will be processed.

monitor
Start a monitor. Like 'worker', this is blocking, so it should be started in a thread. It is responsible for retrying jobs that time out (which can happen if the processing thread is killed, for example). You need to have at least one of these running for retries to happen, but it is safe to have any number running.

broken
Returns the jobs that could not be deserialized, most likely because you changed the 'ToJSON'/'FromJSON' instances for your job in a way that resulted in old jobs no longer being convertible back from JSON. Another reason for jobs to end up here (and a much worse one) is if you point two instances of 'Hworker' with different job types at the same queue (i.e., you re-use the name). Then, any time a worker from one queue gets a job from the other, it will think the job is broken.

jobs
Returns all pending jobs.

failed
Returns all failed jobs. This is capped at the most recent 'hwconfigFailedQueueSize' jobs that returned 'Failure' (or threw an exception when 'hwconfigExceptionBehavior' is 'FailOnException').

debugger
Logs the contents of the job queue and the in-progress queue at the given interval, in microseconds.
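Putting the pieces together, here is a sketch of how a process might wire up the blocking 'worker' and 'monitor' loops (plus the optional 'debugger'), continuing the module from the previous sketch. The thread layout is an assumption rather than something the library prescribes, and the 'debugger' call assumes the interval is its first argument, per the description above.

> import Control.Concurrent (forkIO, threadDelay)
> import Control.Monad      (forever)
>
> main :: IO ()
> main = do
>   st <- State <$> newMVar 0
>   hw <- create "printer" st
>   -- At least one 'worker' and one 'monitor' must be running; extra workers
>   -- (in this process or on other machines) only make the queue drain faster.
>   _ <- forkIO (worker hw)
>   _ <- forkIO (worker hw)
>   _ <- forkIO (monitor hw)
>   -- Dump the job and in-progress queues roughly once per second.
>   _ <- forkIO (debugger 1000000 hw)
>   -- Enqueue a 'Print' job every second, forever.
>   forever (queue hw Print >> threadDelay 1000000)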