HUnit-Plus-1.0.1

API documentation for the HUnit-Plus test framework. The package provides the
following modules: Test.HUnitPlus.Base, Test.HUnitPlus.Terminal,
Test.HUnitPlus.Filter, Test.HUnitPlus.Reporting, Test.HUnitPlus.XML,
Test.HUnitPlus.Execution, Test.HUnitPlus.Text, Test.HUnitPlus.Main,
Test.HUnitPlus.Legacy, and Test.HUnitPlus.


Test.HUnitPlus.Terminal
-----------------------

terminalAppearance: Simplifies the input string by interpreting '\r' and '\b'
characters specially, so that the result string has the same final (or
terminal, pun intended) appearance as the input string would have when written
to a terminal that overwrites character positions following carriage returns
and backspaces.

The helper function takes an accumulating ShowS-style function that holds
committed lines of text, a (reversed) list of characters on the current line
before the cursor, a (normal) list of characters on the current line after the
cursor, and the remaining input.


Test.HUnitPlus.Filter
---------------------

Filter: Specifies zero or more test suites, to which the given Selector is
then applied. If no test suites are specified, then the Selector applies to
all test suites.
  * filterSuites: The test suites to which the Selector applies. The empty set
    actually means 'all suites'.
  * filterSelector: The Selector to apply.

Selector: A tree-like structure that represents a set of tests within a given
suite.
  * selectorInners: Selectors for subgroups of this one. The entry for each
    path element contains the Selector to be used for that group (or test). An
    empty map actually means 'select all tests'.
  * selectorTags: Tags by which to filter all tests. The empty set actually
    means 'run all tests regardless of tags'. Nothing means that all tests
    will be skipped (though this will be overridden by any Selectors in
    selectorInners).

combineTags: Combine two selectorTags fields into one. This operation
represents the union of the tests that are selected by the two fields.

diffTags: Take the difference of one set of tags from another.

passFilter: A Filter that selects all tests in all suites.

allSelector: A Selector that selects all tests.

combineSelectors: Combine two Selectors into a single Selector.

collectUniversals: Collect all the selectors from filters that apply to all
suites.

collectSelectors: Build a map from suite names to the selectors that get run
on them. Arguments: the current filter, and the map from suites to Selectors.

suiteSelectors: Take a list of test suite names and a list of Filters, and
build a Map that says, for each test suite, what (combined) Selector should be
used to select tests. Arguments: the names of all test suites, and the list of
Filters from which to build the map.

parseFilter: Parse a Filter expression. The format for filter expressions is
described in the module documentation. Arguments: the name of the source, and
the input.

parseFilterFileContent: Parse content from a testlist file. The file must
contain one filter per line. Leading and trailing spaces are ignored, as are
lines that contain no filter. A '#' will cause the parser to skip the rest of
the line. Arguments: the name of the input file, and the file content.

parseFilterFile: Given a FilePath, get the contents of the file and parse it
as a testlist file.
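As a minimal sketch of how these pieces fit together (the types here are
assumptions based only on the descriptions above: suite names as plain
Strings, and suiteSelectors returning a map keyed by suite name), a selector
map that runs every test in two hypothetical suites could be built like this:

> import Test.HUnitPlus.Filter (passFilter, suiteSelectors)
>
> -- 'passFilter' selects all tests in all suites; 'suiteSelectors' combines
> -- the given filters into one Selector per named suite.
> selectorMap = suiteSelectors ["UnitTests", "IntegrationTests"] [passFilter]

A map built this way is what the functions in Test.HUnitPlus.Execution
(described below) consume when deciding which tests to run.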
Test.HUnitPlus.Reporting
------------------------

Reporter: Report generator. This record type contains a number of functions
that are called at various points throughout a test run.
  * reporterStart: Called at the beginning of a test run.
  * reporterEnd: Called at the end of a test run.
  * reporterStartSuite: Called at the start of a test suite run.
  * reporterEndSuite: Called at the end of a test suite run.
  * reporterStartCase: Called at the start of a test case run.
  * reporterCaseProgress: Called to report progress of a test case run.
  * reporterEndCase: Called at the end of a test case run.
  * reporterSkipCase: Called when skipping a test case.
  * reporterSystemOut: Called to report output printed to the system output
    stream.
  * reporterSystemErr: Called to report output printed to the system error
    stream.
  * reporterFailure: Called when a test fails.
  * reporterError: Called when a test reports an error.

Node: Composed into Paths.

Path: Uniquely describes the location of a test within a test hierarchy. Node
order is from test case to root.

State: Keeps track of the remaining tests and the results of the performed
tests. As each test is performed, the path is removed and the counts are
updated as appropriate.
  * stName: The name of the case or suite currently being run.
  * stPath: The path to the test case currently being run.
  * stCounts: The current test statistics.
  * stOptions: The current option values.
  * stOptionDescs: The current option descriptions we know about.

Counts: A record that holds the results of tests that have been performed up
until this point.
  * cCases: Number of total cases.
  * cTried: Number of cases tried.
  * cErrors: Number of cases that failed with an error.
  * cFailures: Number of cases that failed.
  * cSkipped: Number of cases that were skipped.
  * cAsserts: Total number of assertions checked.
  * cCaseAsserts: Number of assertions checked by the last test case.

zeroCounts: A Counts value with all zero counts.

defaultReporter: A reporter containing default actions, which are to do
nothing and return the user state unmodified.

showPath: Converts a test case path to a string, separating adjacent elements
by a dot ('.'). An element of the path is quoted (as with show) when there is
potential ambiguity.

showQualName: Generate a string showing the entire qualified name from the
reporting state.

combinedReporter: Combines two Reporters into a single reporter that calls
both.
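Because Counts is an ordinary record of the counters listed above, small
helpers over it are straightforward. A sketch, assuming the fields are
exported from Test.HUnitPlus.Reporting and their numeric type has a Show
instance (note that Test.HUnitPlus.Text, described later, already provides
showCounts for this purpose):

> import Test.HUnitPlus.Reporting (Counts(..))
>
> -- Render the documented counters as a one-line summary.
> summarize :: Counts -> String
> summarize c =
>   "cases: " ++ show (cCases c) ++
>   ", tried: " ++ show (cTried c) ++
>   ", errors: " ++ show (cErrors c) ++
>   ", failures: " ++ show (cFailures c) ++
>   ", skipped: " ++ show (cSkipped c) ++
>   ", assertions: " ++ show (cAsserts c)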
Test.HUnitPlus.XML
------------------

propertyElem: Generate an element for a property definition. Argument: the
name/value pair.

propertiesElem: Generate an element for a set of property definitions.
Argument: a list of name/value pairs to make into properties.

systemOutElem: Generate an element representing output to stdout. Argument:
the stdout output.

systemErrElem: Generate an element representing output to stderr. Argument:
the stderr output.

failureElem: Generate an element representing a test failure. Argument: a
message associated with the failure.

errorElem: Generate an element representing an error in a test. Argument: a
message associated with the error.

testcaseElem: Generate an element for a single test case. Arguments: the name
of the test, the path to the test (reported as "classname"), the number of
assertions in the test, the execution time of the test, and elements
representing the events that happened during test execution.

skippedTestElem: Generate an element for a skipped test case. Arguments: the
name of the test, and the path of the test.

testSuiteElem: Generate an element for a test suite run. Arguments: the name
of the test suite, the properties defined for this suite, the number of tests,
the number of failures, the number of errors, the number of skipped tests, the
hostname of the machine on which this was run, the timestamp at which time
this was run, the execution time for the test suite, and the testcases and
output nodes for the test suite.

testSuitesElem: Generate the top-level element containing all test suites.
Arguments: the execution time of all suites, and elements representing all the
test suites.

xmlReporter: A reporter that generates JUnit XML reports.


Test.HUnitPlus.Base
-------------------

Testable: Provides a way to convert data into a Test or set of Tests.
  * testNameTags: Create a test with a given name and tag set from a Testable
    value.
  * testName: Create a test with a given name and no tags from a Testable
    value.
  * testTags: Create a test with a given tag set and a synthetic name from a
    Testable value.
  * test: Create a test with a synthetic name and no tags from a Testable
    value.

TestSuite: Definition for a test suite. This is intended to be a top-level
(i.e. non-nestable) container for tests. Test suites have a name, a list of
options with default values (which can be overridden either at runtime or
statically using ExtraOptions), and a set of Tests to be run. Individual tests
are described using definitions found in cabal's Distribution.TestSuite
module, to allow for straightforward integration with cabal testing
facilities.
  * suiteName: The name of the test suite.
  * suiteConcurrently: Whether or not to run the tests concurrently.
  * suiteOptions: A list of all options used by this suite, and the default
    values for those options.
  * suiteTests: The tests in the suite.

ListAssertable: A specialized form of Assertable to handle lists.

Assertable: Allows the extension of the assertion mechanism. Since an
Assertion can be a sequence of Assertions and IO actions, there is a fair
amount of flexibility in what can be achieved. As a rule, the resulting
Assertion should not assert multiple, independent conditions. If more complex
arrangements of assertions are needed, Tests and Testable should be used.
  * assertWithMsg: Assertion with a failure message.
  * assert: Assertion with no failure message.

Internal test state (TestInfo):
  * tiAsserts: Current counts of assertions, tried, failed, and errors.
  * tiEvents: Events that have been logged.
  * tiIgnoreResult: Whether or not the result of the test computation is
    already reflected here. This is used to differentiate between black box
    tests and tests we've built with these tools.
  * tiPrefix: String to attach to every failure message as a prefix.

TestException: An Exception used to abort test execution immediately.
  * teError: Whether this is a failure or an error.
  * teMsg: The failure (or error) message.

executeTest: Does the actual work of executing a test. This maintains the
necessary bookkeeping recording assertions and failures. It also sets up
exception handlers and times the test. Arguments: the reporter to use for
reporting results, the HUnit-Plus internal state, the reporter state, and the
test to run.

reportTestInfo: Interface between the invisible TestInfo and the rest of the
test execution framework.

ignoreResult: Indicate that the result of a test is already reflected in the
testinfo.

withPrefix: Execute the given computation with a message prefix.

logSysout: Record sysout output.

logSyserr: Record syserr output.

logAssert: Record that one assertion has been checked.

logError: Record an error, along with a message.

logFailure: Record a failure, along with a message.

getFailures: Get a combined failure message, if there is one.

getErrors: Get a combined error message, if there is one.

assertFailure: Unconditionally signal that a failure has occurred. This will
not stop execution, but will record the failure, resulting in a failed test.
Argument: the failure message.

assertSuccess: Signal that an assertion succeeded. This will log that an
assertion has been made.

abortError: Signal that an error has occurred and stop the test immediately.

abortFailure: Signal that a failure has occurred and stop the test
immediately. Note that if an error has been logged already, the test will be
reported as an error.

assertBool: Asserts that the specified condition holds. Arguments: the message
that is displayed if the assertion fails, and the condition.

assertString: Signals an assertion failure if a non-empty message (i.e., a
message other than "") is passed. Argument: the message that is displayed with
the assertion failure.

assertStringWithPrefix: Signals an assertion failure if a non-empty message
(i.e., a message other than "") is passed. Allows a prefix to be supplied for
the assertion failure message. Arguments: the prefix to attach to the string
if not null, and the string to assert is null.

assertEqual: Asserts that the specified actual value is equal to the expected
value. The output message will contain the prefix, the expected value, and the
actual value. If the prefix is the empty string (i.e., ""), then the prefix is
omitted and only the expected and actual values are output. Arguments: the
message prefix, the expected value, and the actual value.

assertThrowsExact: Assert that the given computation throws a specific
exception. Arguments: the exception to be caught, and the computation that
should throw the exception.

assertThrows: Assert that the given computation throws an exception that
matches a predicate. Arguments: the exception to be caught, and the
computation that should throw the exception.
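To make the assertion functions above concrete, here is a small Assertion
built from assertBool and assertEqual (a sketch; the argument orders follow
the descriptions above, with the message or message prefix first, and the Int
annotation is only there to fix the numeric type):

> import Test.HUnitPlus.Base (Assertion, assertBool, assertEqual)
>
> -- A single logical property of 'reverse', checked with two assertions.
> reverseRoundTrip :: Assertion
> reverseRoundTrip = do
>   let xs = [1, 2, 3] :: [Int]
>   assertBool "reverse should preserve length"
>              (length (reverse xs) == length xs)
>   assertEqual "double reverse" xs (reverse (reverse xs))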
(@?): Shorthand for assertBool. Arguments: a value of which the asserted
condition is predicated, and a message that is displayed if the assertion
fails.

(@=?): Asserts that the specified actual value is equal to the expected value
(with the expected value on the left-hand side). Arguments: the expected
value, and the actual value.

(@?=): Asserts that the specified actual value is equal to the expected value
(with the actual value on the left-hand side). Arguments: the actual value,
and the expected value.

testSuite: Create a test suite from a name and a list of tests. Arguments: the
suite's name, and the tests in the suite.

(~?): Creates a test case resulting from asserting the condition obtained from
the specified AssertionPredicable. Arguments: a value of which the asserted
condition is predicated, and a message that is displayed on test failure.

(~=?): Shorthand for a test case that asserts equality (with the expected
value on the left-hand side, and the actual value on the right-hand side).
Arguments: the expected value, and the actual value.

(~?=): Shorthand for a test case that asserts equality (with the actual value
on the left-hand side, and the expected value on the right-hand side).
Arguments: the actual value, and the expected value.

(~:): Creates a test from the specified Testable, with the specified label
attached to it. Since Test is Testable, this can be used as a shorthand way of
attaching a TestLabel to one or more tests.
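Putting the shorthand operators together with testSuite, a complete suite
might look like the following sketch (the suite and test names are
hypothetical, and the assumption is that the tests produced by (~:) and (~?=)
are exactly what testSuite expects in its test list):

> import Test.HUnitPlus.Base (TestSuite, testSuite, (~:), (~?=))
>
> -- Each test is built with (~?=) (actual on the left, expected on the
> -- right) and labelled with (~:); testSuite collects them under a name.
> arithmeticSuite :: TestSuite
> arithmeticSuite = testSuite "Arithmetic"
>   [ "addition"       ~: (1 + 1 ~?= (2 :: Int))
>   , "multiplication" ~: (2 * 3 ~?= (6 :: Int))
>   ]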
Test.HUnitPlus.Execution
------------------------

performTestCase: Execute an individual test case. Arguments: the report
generator for the test run, the HUnit-Plus internal state, the state for the
report generator, and the test to be executed.

skipTestCase: Log a skipped test case. Arguments: the report generator for the
test run, the HUnit-Plus internal state, the state for the report generator,
and the test to be executed.

performTest: Execute a given test (which may be a group), with the specified
selector and report generators. Only tests which match the selector will be
executed. The rest will be logged as skipped. Arguments: the report generator
for the test run, the selector to apply to all tests in the suite, the
HUnit-Plus internal state, the state for the report generator, and the test to
be executed.

performTestSuite: Decide whether to execute a test suite based on a map from
suite names to selectors. If the map contains a selector for the test suite,
execute all tests matching the selector, and log the rest as skipped. If the
map does not contain a selector, do not execute the suite, and do not log its
tests as skipped. Arguments: the report generator to use for running the test
suite, the map containing selectors for each suite, the state for the report
generator, and the test suite to be run.

performTestSuites: Top-level function for a test run. Given a set of suites
and a map from suite names to selectors, execute all suites that have
selectors. For any test suite, only the tests specified by its selector will
be executed; the rest will be logged as skipped. Suites that do not have a
selector will be omitted entirely, and their tests will not be logged as
skipped. Arguments: the report generator to use for running the test suites,
the processed filter to use, and the test suites to be run.
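A rough sketch of driving a run directly through performTestSuites; the
preferred entry points are in Test.HUnitPlus.Main (below), and the reporter
choice, the ignored result value, and the exact shapes of the arguments are
all assumptions based on the argument descriptions above:

> import Test.HUnitPlus.Base (TestSuite, suiteName)
> import Test.HUnitPlus.Execution (performTestSuites)
> import Test.HUnitPlus.Filter (passFilter, suiteSelectors)
> import Test.HUnitPlus.Text (terminalReporter)
>
> -- Run every test in every suite, reporting progress to the terminal.
> runAll :: [TestSuite] -> IO ()
> runAll suites =
>   do let selectors = suiteSelectors (map suiteName suites) [passFilter]
>      _ <- performTestSuites terminalReporter selectors suites
>      return ()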
Test.HUnitPlus.Text
-------------------

PutText: The text-based reporters (textReporter and terminalReporter)
construct strings and pass them to the function embodied in a PutText. This
function handles the string in one of several ways. Two schemes are defined
here: putTextToHandle writes report lines to a given handle, and
putTextToShowS accumulates lines for return as a whole. The PutText function
is also passed, and returns, an arbitrary state value (called st here). The
initial state value is given in the PutText; the final value is returned by
runTestText.

putTextToHandle: Writes persistent lines to the given handle.

putTextToShowS: Accumulates lines for return by runTestText. The accumulated
lines are represented by a ShowS (String -> String) function whose first
argument is the string to be appended to the accumulated report lines.

textReporter: Create a Reporter that outputs a textual report for non-terminal
output. Arguments: the method for outputting text, and whether or not to
output verbose text.

runTestText: Execute a test, processing text output according to the given
reporting scheme. The reporting scheme's state is threaded through calls to
the reporting scheme's function and finally returned, along with final count
values. The text is output in non-terminal mode. This function is deprecated;
the preferred way to run tests is to use the functions in Test.HUnitPlus.Main.
Arguments: a function which accumulates output, whether or not to run the test
in verbose mode, and the test to run.

runSuiteText: Execute a test suite, processing text output according to the
given reporting scheme. The reporting scheme's state is threaded through calls
to the reporting scheme's function and finally returned, along with final
count values. The text is output in non-terminal mode. This function is
deprecated; the preferred way to run tests is to use the functions in
Test.HUnitPlus.Main. Arguments: a function which accumulates output, whether
or not to run the tests in verbose mode, and the test suite to run.

runSuitesText: Execute the given test suites, processing text output according
to the given reporting scheme. The reporting scheme's state is threaded
through calls to the reporting scheme's function and finally returned, along
with final count values. The text is output in non-terminal mode. This
function is deprecated; the preferred way to run tests is to use the functions
in Test.HUnitPlus.Main. Arguments: a function which accumulates output,
whether or not to run the tests in verbose mode, and the tests to run.

showCounts: Converts test execution counts to a string.

termPut: Terminal output function, used by the run*TT functions and terminal
reporters.

terminalReporter: A reporter that outputs lines indicating progress to the
terminal. Reporting is made to standard error, and progress reports are
included.

runTestTT: Execute a test, processing text output according to the given
reporting scheme. The reporting scheme's state is threaded through calls to
the reporting scheme's function and finally returned, along with final count
values. The text is output in terminal mode. This function is deprecated; the
preferred way to run tests is to use the functions in Test.HUnitPlus.Main.

runSuiteTT: Execute a test suite, processing text output according to the
given reporting scheme. The reporting scheme's state is threaded through calls
to the reporting scheme's function and finally returned, along with final
count values. The text is output in terminal mode. This function is
deprecated; the preferred way to run tests is to use the functions in
Test.HUnitPlus.Main.

runSuitesTT: Execute the given test suites, processing text output according
to the given reporting scheme. The reporting scheme's state is threaded
through calls to the reporting scheme's function and finally returned, along
with final count values. The text is output in terminal mode. This function is
deprecated; the preferred way to run tests is to use the functions in
Test.HUnitPlus.Main.


Test.HUnitPlus.Main
-------------------

Opts: Command-line options for generated programs.
  * xmlreport: A file to which to write a JUnit-style XML report. The list
    must contain a single value, or be empty, or else the test program will
    report bad options. If the list is empty, no XML report will be generated.
  * filters: Filters in string format, specifying which tests should be run.
    If no filters are given, then all tests will be run. For information on
    the string format, see Test.HUnitPlus.Filter.
  * txtreport: A file to which to write a plain-text report. The list must
    contain a single value, or be empty, or else the test program will report
    bad options. If the list is empty, no report will be generated.
  * consmode: The behavior of the console output.
  * testlist: Files from which to read testlists. Multiple files may be
    specified. The contents will be parsed and added to the list of filters
    specified on the command line.

ConsoleMode: Console mode options.
  * Verbose: Report extra information as tests execute.
  * Text: Report a summary of tests run, skipped, failed, and errored after
    execution.
  * Terminal: Report test counts interactively during execution, updating the
    number of tests run, skipped, failed, and errored as they execute.
  * Quiet: Do not generate any console output.

opts: Command-line options for the System.Console.CmdArgs module.

Internal helpers read and parse a single test list file (taking the command
line options), translate an IOError into an error message (taking a prefix to
attach to error messages and the exception to interpret), and set up the
report handles (taking the command line options and a monad parameterized by
the XML report handle and the text report handle). A further helper executes
the tests, taking the test suites to run, the filters to use, the mode to use
for console output, the Handle for XML reporting, and the Handle for text
reporting.

createMain: Create a standard test execution program from a set of test
suites. The resulting main will process command line options as described,
execute the appropriate tests, and exit with success if all tests passed, and
fail otherwise.

topLevel: Top-level function for executing test suites. createMain is simply a
wrapper around this function. This function allows users to supply their own
options, and to decide what to do with the result of test execution.
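For a cabal- or command-line-driven test program, createMain is the usual
entry point. A minimal sketch, assuming createMain takes the list of suites
directly (suite contents as in the earlier Base example):

> import Test.HUnitPlus.Base (TestSuite, testSuite, (~:), (~?=))
> import Test.HUnitPlus.Main (createMain)
>
> suites :: [TestSuite]
> suites = [testSuite "Arithmetic" ["addition" ~: (1 + 1 ~?= (2 :: Int))]]
>
> -- Option parsing, filtering, reporting and the exit code are all handled
> -- by createMain, as described above.
> main :: IO ()
> main = createMain suites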
Test.HUnitPlus.Legacy
---------------------

This module provides the basic HUnit-style structure used to create an
annotated tree of test cases:

TestLabel: A name or description for a subtree of the Tests.

TestList: A set of Tests sharing the same level in the hierarchy.

TestCase: A single, independent test case composed of an Assertion.
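For code migrating from classic HUnit, here is a hedged sketch of the legacy
constructors in use, assuming they are exported as ordinary functions
mirroring HUnit's Test constructors, as the descriptions above suggest:

> import Test.HUnitPlus.Base (assertEqual)
> import Test.HUnitPlus.Legacy (TestCase, TestLabel, TestList)
>
> -- An HUnit-style labelled tree of test cases.
> legacyTests = TestLabel "arithmetic" (TestList
>   [ TestCase (assertEqual "addition" 2 (1 + 1 :: Int))
>   , TestCase (assertEqual "negation" (-1) (negate 1 :: Int))
>   ])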