Type level to value level conversion of numbers that are either dynamically or statically known.

> toNatDS (Proxy ('S 42)) == S 42
> toNatDS (Proxy 'D) == D

Heterogeneous lists, implemented as nested 2-tuples:

> f :: Int ::: Bool ::: Char ::: Z
> f = 3 ::: False ::: 'X' ::: Z

'Z' marks the end of the list.

Dynamically or statically known values: mainly used as a promoted type. Operationally it is exactly the Maybe type. 'D' means something is dynamically known; 'S a' means something is statically known, in particular: a. There is a conversion from a DS value to the corresponding Maybe value.

Type level numbers are statically known; value level numbers are dynamically known. At the type level the known natural number n is reified; at the value level the conversion is the identity.

VideoCapture capture backends:

* Auto detect == 0
* Video For Windows (platform native)
* V4L/V4L2 capturing support via libv4l
* Same as CAP_V4L
* IEEE 1394 drivers
* Same as CAP_FIREWIRE (several aliases)
* QuickTime/Unicap drivers
* DirectShow (via videoInput)
* PvAPI, Prosilica GigE SDK
* OpenNI (for Kinect)
* OpenNI (for Asus Xtion)
* Android - not used
* XIMEA Camera API
* AVFoundation framework for iOS (OS X Lion will have the same API)
* Smartek Giganetix GigEVisionSDK
* Microsoft Media Foundation (via videoInput)
* Microsoft Windows Runtime using Media Foundation
* Intel Perceptual Computing SDK
* OpenNI2 (for Kinect)
* OpenNI2 (for Asus Xtion and Occipital Structure sensors)
* gPhoto2 connection
* GStreamer
* Open and record video file or stream using the FFMPEG library
* Image Sequence (e.g. img_%02d.jpg)

VideoCapture properties:

* Current position of the video file in milliseconds.
* 0-based index of the frame to be decoded/captured next.
* Relative position of the video file: 0=start of the film, 1=end of the film.
* Width of the frames in the video stream.
* Height of the frames in the video stream.
* Frame rate.
* 4-character code of codec.
* Number of frames in the video file.
* Format of the Mat objects returned by VideoCapture::retrieve().
* Backend-specific value indicating the current capture mode.
* Brightness of the image (only for cameras).
* Contrast of the image (only for cameras).
* Saturation of the image (only for cameras).
* Hue of the image (only for cameras).
* Gain of the image (only for cameras).
* Exposure (only for cameras).
* Boolean flags indicating whether images should be converted to RGB.
* Currently unsupported.
* Rectification flag for stereo cameras (note: only supported by the DC1394 v 2.x backend currently).
* DC1394: exposure control done by camera, user can adjust reference level using this feature.
* Pop up video/camera filter dialog (note: only supported by the DSHOW backend currently. Property value is ignored).
* Any property we need.
  Meaning of this property depends on the backend.

Wrapper for mutable values.

Image saving parameters:

* Compression (run length encoding)
* Binary
* Quality [1..100], > 100 == lossless
* 0..100

Other parameters: number of channels; normalization type; comparison type.

Types of which a value can be constructed from a pointer to the C equivalent of that value. Used to wrap values created in C.

Perform an IO action with a pointer to the C equivalent of a value, or perform an action with a temporary pointer to the underlying representation of a value. The pointer is not guaranteed to be usable outside the scope of this function.
The same warnings apply as for withForeignPtr.

Equivalent type in C: actually a proxy type in Haskell that stands for the equivalent type in C.

Information about the storage requirements of values in C: this class assumes that the type a is merely a symbol that corresponds with a type in C. It computes the storage requirements (in bytes) of values of type a in C.

Callback function for trackbars. Callback function for mouse events.

Haskell representations of OpenCV objects:

* cv::CascadeClassifier
* cv::VideoWriter
* cv::VideoCapture
* cv::Ptr<cv::BackgroundSubtractorKNN>
* cv::Ptr<cv::BackgroundSubtractorMOG2>
* cv::FlannBasedMatcher
* cv::BFMatcher
* cv::DescriptorMatcher
* cv::Ptr<cv::SimpleBlobDetector>
* cv::Ptr<cv::ORB>
* cv::DMatch
* cv::KeyPoint
* cv::Mat
* cv::Scalar_<double>
* cv::Range
* cv::TermCriteria
* cv::RotatedRect
* an OpenCV exception

Mutable types use the same underlying representation as immutable types.

Trackbar callback arguments: current position of the specified trackbar; optional pointer to user data.

Mouse callback arguments: one of the cv::MouseEventTypes constants; the x-coordinate of the mouse event; the y-coordinate of the mouse event; one of the cv::MouseEventFlags constants; optional pointer to user data.
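The "temporary pointer" contract described above can be sketched with standard Foreign machinery. This is an illustration of the pattern only, not the library's actual class hierarchy; the helper name withCInt is made up:

```haskell
import Data.Int (Int32)
import Foreign.Marshal.Alloc (alloca)
import Foreign.Ptr (Ptr)
import Foreign.Storable (peek, poke)

-- Hand an action a temporary pointer to the C representation of a value.
-- As with withForeignPtr, the pointer is only valid inside the callback:
-- 'alloca' releases the memory as soon as the action returns.
withCInt :: Int32 -> (Ptr Int32 -> IO b) -> IO b
withCInt x act = alloca $ \ptr -> do
    poke ptr x  -- write the Haskell value into C-managed memory
    act ptr     -- the action may read or write through the pointer
```

For example, `withCInt 7 peek` evaluates to 7. The key design point is that the allocation's lifetime is bracketed by the function, which is why the pointer must not escape the callback.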
Copy source to destination using C++'s placement new feature. This method is intended for types that are proxies for actual types in C++.

> new(dst) CType(*src)

The copy should be performed by constructing a new object in the memory pointed to by dst. The new object is initialised using the value of src. This design allows underlying structures to be shared depending on the implementation of CType.

Context useful to work with the OpenCV library: converts OpenCV basic types to their counterparts in OpenCV.Internal.C.Inline.

Type name descriptions, for both Haskell and C:

* Matx type name: row dimension; column dimension; depth type name in Haskell; depth type name in C.
* Point type name: point dimension; point template name in C; depth type name in Haskell; depth type name in C.
* Size type name: depth type name in Haskell; depth type name in C.
* Vec type name: Vec dimension; depth type name in Haskell; depth type name in C.

A continuous subsequence (slice) of a sequence. The type is used to specify a row or a column span in a matrix (Mat) and for many other purposes. mkRange a b is basically the same as a:b in Matlab or a..b in Python.
As in Python, start is an inclusive left boundary of the range and end is an exclusive right boundary of the range. Such a half-opened interval is usually denoted as [start, end).

<http://docs.opencv.org/3.0-last-rst/modules/core/doc/basic_structures.html#range OpenCV Sphinx doc>

Termination criteria for iterative algorithms.

<http://docs.opencv.org/3.0-last-rst/modules/core/doc/basic_structures.html#termcriteria OpenCV Sphinx doc>

Rotated (i.e. not up-right) rectangles on a plane. Each rectangle is specified by the center point (mass center), the length of each side, and the rotation angle in degrees.

<http://docs.opencv.org/3.0-last-rst/modules/core/doc/basic_structures.html#rotatedrect OpenCV Sphinx doc>

A 4-element vector with 64 bit floating point elements. The type is widely used in OpenCV to pass pixel values.

<http://docs.opencv.org/3.0-last-rst/modules/core/doc/basic_structures.html#scalar OpenCV Sphinx doc>

There is a special range value which means "the whole sequence" or "the whole range".

Perform an action with a temporary pointer to an array of values. The input values are placed consecutively in memory. This function is intended for types which are not managed by the Haskell runtime, but by a foreign system (such as C). The pointer is not guaranteed to be usable outside the scope of this function. The same warnings apply as above.

Field documentation: rectangle mass center; width and height of the rectangle; the rotation angle (in degrees). When the angle is 0, 90, 180, 270 etc., the rectangle becomes an up-right rectangle. Optionally the maximum number of iterations/elements; optionally the desired accuracy. Inclusive start; exclusive end.

Tests whether a Mat is deserving of its type level attributes: checks whether the properties encoded in the type of a Mat correspond to the value level representation. For each property that does not hold this function will produce an error message.
If everything checks out it will produce an empty list. The following properties are checked:

* Dimensionality
* Size of each dimension
* Number of channels
* Depth (data type of elements)

If a property is explicitly encoded as statically unknown (dynamic) it will not be checked.

Relaxes the type level constraints: only identical or looser constraints are allowed; for tighter constraints use the coercion function. This allows you to 'forget' type level guarantees for zero cost. Similar to an unsafe coercion, but totally safe.

* Identical: a to b with a ~ b
* Looser: ('S a) to 'D, or ('S a) to ('S b) with a matching b
* Tighter: 'D to ('S a)

There is also a function that keeps the Mat alive during the execution of a given action, but doesn't extract the value from it.

All possible positions (indexes) for a given shape (list of sizes per dimension):

> dimPositions [3, 4]
>   == [ [0, 0], [0, 1], [0, 2], [0, 3]
>      , [1, 0], [1, 1], [1, 2], [1, 3]
>      , [2, 0], [2, 1], [2, 2], [2, 3]
>      ]

Arguments: the matrix to be checked; error messages; the original Mat; the Mat with relaxed constraints.

Inpainting methods:

* Navier-Stokes based method.
* Method by Alexandru Telea.

Restores the selected region in an image using the region neighborhood.

Example:

> inpaintImg :: forall h h2 w w2 c d .
>     ( Mat (ShapeT [h, w]) ('S c) ('S d) ~ Bikes_512x341
>     , h2 ~ ((*) h 2)
>     , w2 ~ ((*) w 2)
>     ) => Mat ('S ['S h2, 'S w2]) ('S c) ('S d)
> inpaintImg = exceptError $ do
>     maskInv <- bitwiseNot mask
>     maskBgr <- cvtColor gray bgr maskInv
>     damaged <- bitwiseAnd bikes_512x341 maskBgr
>     repairedNS <- inpaint 3 InpaintNavierStokes damaged mask
>     repairedT  <- inpaint 3 InpaintTelea        damaged mask
>     withMatM (Proxy :: Proxy [h2, w2])
>              (Proxy :: Proxy c)
>              (Proxy :: Proxy d)
>              black $ \imgM -> do
>       matCopyToM imgM (V2 0 0) damaged    Nothing
>       matCopyToM imgM (V2 w 0) maskBgr    Nothing
>       matCopyToM imgM (V2 0 h) repairedNS Nothing
>       matCopyToM imgM (V2 w h) repairedT  Nothing
>   where
>     mask = damageMask
>     w = fromInteger $ natVal (Proxy :: Proxy w)
>     h = fromInteger $ natVal (Proxy :: Proxy h)

<<doc/generated/examples/inpaintImg.png inpaintImg>>

Perform the fastNlMeansDenoising function for colored images. Denoising is not per channel but in a different colour space.

Example:

> fastNlMeansDenoisingColoredImg :: forall h w w2 c d
>   . ( Mat (ShapeT [h, w]) ('S c) ('S d) ~ Lenna_512x512
>     , w2 ~ ((*) w 2)
>     ) => Mat ('S ['S h, 'S w2]) ('S c) ('S d)
> fastNlMeansDenoisingColoredImg = exceptError $ do
>     denoised <- fastNlMeansDenoisingColored 3 10 7 21 lenna_512x512
>     withMatM (Proxy :: Proxy [h, w2])
>              (Proxy :: Proxy c)
>              (Proxy :: Proxy d)
>              black $ \imgM -> do
>       matCopyToM imgM (V2 0 0) lenna_512x512 Nothing
>       matCopyToM imgM (V2 w 0) denoised      Nothing
>   where
>     w = fromInteger $ natVal (Proxy :: Proxy w)

<<doc/generated/examples/fastNlMeansDenoisingColoredImg.png fastNlMeansDenoisingColoredImg>>

Perform the fastNlMeansDenoisingColoredMulti function for colored images. Denoising is not per channel but in a different colour space. This wrapper differs from the original OpenCV version by using all input images and denoising the middle one. The original version would allow an arbitrary length vector and slide a window over it.
As we have to copy the Haskell vector before we can use it as a std::vector on the C++ side, it is easier to trim the vector before sending and use all frames.

Example:

> fastNlMeansDenoisingColoredMultiImg :: forall h w w2 c d
>   . ( Mat (ShapeT [h, w]) ('S c) ('S d) ~ Lenna_512x512
>     , w2 ~ ((*) w 2)
>     ) => Mat ('S ['S h, 'S w2]) ('S c) ('S d)
> fastNlMeansDenoisingColoredMultiImg = exceptError $ do
>     denoised <- fastNlMeansDenoisingColoredMulti 3 10 7 21 (V.singleton lenna_512x512)
>     withMatM (Proxy :: Proxy [h, w2])
>              (Proxy :: Proxy c)
>              (Proxy :: Proxy d)
>              black $ \imgM -> do
>       matCopyToM imgM (V2 0 0) lenna_512x512 Nothing
>       matCopyToM imgM (V2 w 0) denoised      Nothing
>   where
>     w = fromInteger $ natVal (Proxy :: Proxy w)

<<doc/generated/examples/fastNlMeansDenoisingColoredMultiImg.png fastNlMeansDenoisingColoredMultiImg>>

Perform denoise_TVL1.

Example:

> denoise_TVL1Img :: forall h w w2 c d
>   . ( Mat (ShapeT [h, w]) ('S c) ('S d) ~ Lenna_512x512
>     , w2 ~ ((*) w 2)
>     ) => Mat ('S ['S h, 'S w2]) ('S c) ('S d)
> denoise_TVL1Img = exceptError $ do
>     denoised <- matChannelMapM (denoise_TVL1 2 50 . V.singleton) lenna_512x512
>     withMatM (Proxy :: Proxy [h, w2])
>              (Proxy :: Proxy c)
>              (Proxy :: Proxy d)
>              black $ \imgM -> do
>       matCopyToM imgM (V2 0 0) lenna_512x512 Nothing
>       matCopyToM imgM (V2 w 0) denoised      Nothing
>   where
>     w = fromInteger $ natVal (Proxy :: Proxy w)

<<doc/generated/examples/denoise_TVL1Img.png denoise_TVL1Img>>

Perform decolor: decolor a color image to a grayscale (1 channel) and a color boosted image (3 channel).

Example:

> decolorImg :: forall h h2 w w2 c d .
>     ( Mat (ShapeT [h, w]) ('S c) ('S d) ~ Bikes_512x341
>     , h2 ~ ((*) h 2)
>     , w2 ~ ((*) w 2)
>     ) => Mat ('S ['S h2, 'S w2]) ('S c) ('S d)
> decolorImg = exceptError $ do
>     (bikesGray, boost) <- decolor bikes_512x341
>     colorGray <- cvtColor gray bgr bikesGray
>     withMatM (Proxy :: Proxy [h2, w2])
>              (Proxy :: Proxy c)
>              (Proxy :: Proxy d)
>              white $ \imgM -> do
>       matCopyToM imgM (V2 0 0) bikes_512x341 Nothing
>       matCopyToM imgM (V2 0 h) colorGray     Nothing
>       matCopyToM imgM (V2 w h) boost         Nothing
>   where
>     w = fromInteger $ natVal (Proxy :: Proxy w)
>     h = fromInteger $ natVal (Proxy :: Proxy h)

<<doc/generated/examples/decolorImg.png decolorImg>>

inpaint arguments:

* inpaintRadius: radius of a circular neighborhood of each point inpainted that is considered by the algorithm.
* Input image.
* Inpainting mask.
* Output image.

fastNlMeansDenoisingColored arguments:

* Parameter regulating filter strength for the luminance component. A bigger h value perfectly removes noise but also removes image details; a smaller h value preserves details but also preserves some noise.
* The same as h but for color components. For most images a value of 10 will be enough to remove colored noise and not distort colors.
* templateWindowSize: size in pixels of the template patch that is used to compute weights. Should be odd. Recommended value: 7 pixels.
* searchWindowSize: size in pixels of the window that is used to compute the weighted average for a given pixel. Should be odd. Affects performance linearly: greater searchWindowSize means greater denoising time. Recommended value: 21 pixels.
* Input image: 8-bit 3-channel image.
* Output image: same size and type as input.

fastNlMeansDenoisingColoredMulti arguments:

* Parameter regulating filter strength for the luminance component. A bigger h value perfectly removes noise but also removes image details; a smaller h value preserves details but also preserves some noise.
* The same as h but for color components. For most images a value of 10 will be enough to remove colored noise and not distort colors.
* templateWindowSize: size in pixels of the template patch that is used to compute weights. Should be odd.
  Recommended value: 7 pixels.
* searchWindowSize: size in pixels of the window that is used to compute the weighted average for a given pixel. Should be odd. Affects performance linearly: greater searchWindowSize means greater denoising time. Recommended value: 21 pixels.
* Vector of an odd number of input 8-bit 3-channel images.
* Output image: same size and type as input.

denoise_TVL1 arguments:

* details: more is more. Default 20.
* Number of iterations that the algorithm will run.
* Vector of an odd number of input 8-bit 3-channel images.
* Output image: same size and type as input.

decolor arguments: input image; output images.

Native Haskell representation of a rectangle.

Rectangle type name, for both Haskell and C: depth type name in Haskell; depth type name in C; point type name in C; size type name in C.

GrabCut initialization modes:

* Initialize the state and the mask using the provided rectangle. After that, run iterCount iterations of the algorithm. The rectangle represents a ROI containing a segmented object. The pixels outside of the ROI are marked as obvious background.
* Initialize the state using the provided mask.
* Combination of GCInitWithRect and GCInitWithMask.
  All the pixels outside of the ROI are automatically initialized with GC_BGD.
* Just resume the algorithm.

Gives the number of channels associated with a particular color encoding. Names of color encodings include, among others:

* Bayer pattern with BG in the second row, second and third column
* Bayer pattern with GB in the second row, second and third column
* Bayer pattern with GR in the second row, second and third column
* Bayer pattern with RG in the second row, second and third column
* 24 bit RGB color space with channels: (B8:G8:R8)
* 15 bit RGB color space
* 16 bit RGB color space
* 32 bit RGBA color space with channels: (B8:G8:R8:A8)
* Edge-Aware demosaicing variants
* 8 bit single channel color space
* 24 bit RGB color space with channels: (R8:G8:B8)

Valid color conversions are described by the following graph:

<<doc/color_conversions.png>>
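The channel counts implied by the encoding list above can be captured in a small lookup. This is a plain-Haskell sketch; the constructor names here are illustrative stand-ins, not the library's actual encoding names:

```haskell
-- Illustrative color-encoding tags (hypothetical names, mirroring the
-- encodings listed above).
data ColorEncoding
  = EncBGR   -- ^ 24 bit, channels (B8:G8:R8)
  | EncBGRA  -- ^ 32 bit, channels (B8:G8:R8:A8)
  | EncRGB   -- ^ 24 bit, channels (R8:G8:B8)
  | EncGray  -- ^ 8 bit single channel
  deriving (Show, Eq)

-- | Gives the number of channels associated with a particular
-- color encoding.
channelCount :: ColorEncoding -> Int
channelCount EncBGR  = 3
channelCount EncBGRA = 4
channelCount EncRGB  = 3
channelCount EncGray = 1
```

A total function like this is the value-level counterpart of what the library tracks at the type level (the channel count in a Mat's type).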
Representation tag for Repa arrays of OpenCV Mats. Converts an OpenCV matrix into a Repa array. This is a zero-copy operation.

Identity matrix.

<http://docs.opencv.org/3.0-last-rst/modules/core/doc/basic_structures.html#mat-eye OpenCV Sphinx doc>

Extract a sub region from a 2D-matrix (image).

Example:

> matSubRectImg :: Mat ('S ['D, 'D]) ('S 3) ('S Word8)
> matSubRectImg = exceptError $
>     withMatM (h ::: 2 * w ::: Z)
>              (Proxy :: Proxy 3)
>              (Proxy :: Proxy Word8)
>              white $ \imgM -> do
>       matCopyToM imgM (V2 0 0) birds_512x341 Nothing
>       matCopyToM imgM (V2 w 0) subImg        Nothing
>       lift $ rectangle imgM subRect blue 1 LineType_4 0
>       lift $ rectangle imgM (toRect $ HRect (V2 w 0) (V2 w h) :: Rect2i) blue 1 LineType_4 0
>   where
>     subRect = toRect $ HRect (V2 96 131) (V2 90 60)
>     subImg = exceptError $
>              resize (ResizeAbs $ toSize $ V2 w h) InterCubic =<<
>              matSubRect birds_512x341 subRect
>     [h, w] = miShape $ matInfo birds_512x341

<<doc/generated/examples/matSubRectImg.png matSubRectImg>>

Converts an array to another data type with optional scaling.

<http://docs.opencv.org/3.0-last-rst/modules/core/doc/basic_structures.html?highlight=convertto#mat-convertto OpenCV Sphinx doc>

Create a matrix whose elements are defined by a function.

Example:

> matFromFuncImg :: forall size .
>     (size ~ 300)
>     => Mat (ShapeT [size, size]) ('S 4) ('S Word8)
> matFromFuncImg = exceptError $
>     matFromFunc (Proxy :: Proxy [size, size])
>                 (Proxy :: Proxy 4)
>                 (Proxy :: Proxy Word8)
>                 example
>   where
>     example [y, x] 0 = 255 - normDist (V2 x y ^-^ bluePt )
>     example [y, x] 1 = 255 - normDist (V2 x y ^-^ greenPt)
>     example [y, x] 2 = 255 - normDist (V2 x y ^-^ redPt  )
>     example [y, x] 3 =       normDist (V2 x y ^-^ alphaPt)
>     example _pos _channel = error "impossible"
>
>     normDist :: V2 Int -> Word8
>     normDist v = floor $ min 255 $ 255 * Linear.norm (fromIntegral <$> v) / s'
>
>     bluePt  = V2 0 0
>     greenPt = V2 s s
>     redPt   = V2 s 0
>     alphaPt = V2 0 s
>
>     s  = fromInteger $ natVal (Proxy :: Proxy size) :: Int
>     s' = fromIntegral s :: Double

<<doc/generated/examples/matFromFuncImg.png matFromFuncImg>>

Transforms a given list of matrices of equal shape, channels, and depth, by folding the given function over all matrix elements at each position.

Arguments: optional scale factor; optional delta added to the scaled values.

HighGUI:

* Callback function for trackbars.
* Callback function for mouse events, with a more convenient representation of the event: context for the event, including information about which buttons and modifier keys were pressed during the event.
* Create a window with the specified title. Make sure to free the window when you're done with it, or better yet: use withWindow.
* Close the window and free up all resources associated with the window.
* withWindow title act makes a window with the specified title and passes the resulting window to the computation act. The window will be destroyed on exit from withWindow, whether by normal termination or by raising an exception.
  Make sure not to use the Window outside the act computation!

Resize a window to the specified size.

Trackbar arguments: trackbar name; initial value; maximum value. The callback receives the current position of the specified trackbar.

Mouse callback arguments: what happened to cause the callback to be fired; the x-coordinate of the mouse event; the y-coordinate of the mouse event; context for the event, such as buttons and modifier keys pressed during the event.

Reads an image from a buffer in memory. The function reads an image from the specified buffer in memory. If the buffer is too short or contains invalid data, the empty matrix/image is returned.

<http://docs.opencv.org/3.0-last-rst/modules/imgcodecs/doc/reading_and_writing_images.html#imdecode OpenCV Sphinx doc>

Encodes an image into a memory buffer.

WARNING: This function is not thread safe!

<http://docs.opencv.org/3.0-last-rst/modules/imgcodecs/doc/reading_and_writing_images.html#imencode OpenCV Sphinx doc>

There is a second variant that also encodes an image into a memory buffer; see above.

Computes an optimal affine transformation between two 2D point sets.

<http://docs.opencv.org/3.0-last-rst/modules/video/doc/motion_analysis_and_object_tracking.html#estimaterigidtransform OpenCV Sphinx doc>

Arguments: source; destination; full affine.

Video capture sources: VideoFile and backend; VideoDevice and backend.

Data structure for salient point detectors.

<http://docs.opencv.org/3.0-last-rst/modules/core/doc/basic_structures.html#keypoint OpenCV Sphinx doc>

RotatedRect fields: rectangle mass center; width and height of the rectangle; the rotation angle (in degrees). When the angle is 0, 90, 180, 270 etc., the rectangle becomes an up-right rectangle. There is also the minimal up-right rectangle containing the rotated
rectangle.

Class for matching keypoint descriptors: query descriptor index, train descriptor index, train image index, and distance between descriptors.

<http://docs.opencv.org/3.0-last-rst/modules/core/doc/basic_structures.html#dmatch OpenCV Sphinx Docs>

KeyPoint fields:

* Coordinates of the keypoint.
* Diameter of the meaningful keypoint neighborhood.
* Computed orientation of the keypoint (-1 if not applicable); it's in [0,360) degrees and measured relative to the image coordinate system, i.e. clockwise.
* The response by which the strongest keypoints have been selected. Can be used for further sorting or subsampling.
* Octave (pyramid layer) from which the keypoint has been extracted.
* Object class (if the keypoints need to be clustered by an object they belong to).

DMatch fields: query descriptor index; train descriptor index; train image index.

Flann-based descriptor matcher. This matcher trains flann::Index_ on a train descriptor collection and calls its nearest search methods to find the best matches. So, this matcher may be faster when matching a large train collection than the brute force matcher. FlannBasedMatcher does not support masking permissible matches of descriptor sets because flann::Index does not support this.

Example:

> fbMatcherImg :: forall (width :: Nat) (width2 :: Nat) (height :: Nat) (channels :: Nat) (depth :: *) .
>     ( Mat (ShapeT [height, width]) ('S channels) ('S depth) ~ Frog
>     , width2 ~ (*) width 2
>     ) => IO (Mat (ShapeT [height, width2]) ('S channels) ('S depth))
> fbMatcherImg = do
>     let (kpts1, descs1) = exceptError $ orbDetectAndCompute orb frog        Nothing
>         (kpts2, descs2) = exceptError $ orbDetectAndCompute orb rotatedFrog Nothing
>     fbmatcher <- newFlannBasedMatcher (def { indexParams = FlannLshIndexParams 20 10 2 })
>     matches <- match fbmatcher
>                      descs1 -- Query descriptors
>                      descs2 -- Train descriptors
>                      Nothing
>     exceptErrorIO $ pureExcept $
>       withMatM (Proxy :: Proxy [height, width2])
>                (Proxy :: Proxy channels)
>                (Proxy :: Proxy depth)
>                white $ \imgM -> do
>         matCopyToM imgM (V2 0 0)     frog        Nothing
>         matCopyToM imgM (V2 width 0) rotatedFrog Nothing
>         -- Draw the matches as lines from the query image to the train image.
>         forM_ matches $ \dmatch -> do
>           let matchRec = dmatchAsRec dmatch
>               queryPt = kpts1 V.! fromIntegral (dmatchQueryIdx matchRec)
>               trainPt = kpts2 V.! fromIntegral (dmatchTrainIdx matchRec)
>               queryPtRec = keyPointAsRec queryPt
>               trainPtRec = keyPointAsRec trainPt
>           -- We translate the train point one width to the right in order to
>           -- match the position of rotatedFrog in imgM.
>           line imgM
>                (round <$> kptPoint queryPtRec :: V2 Int32)
>                ((round <$> kptPoint trainPtRec :: V2 Int32) ^+^ V2 width 0)
>                blue 1 LineType_AA 0
>   where
>     orb = mkOrb defaultOrbParams {orb_nfeatures = 50}
>     width = fromInteger $ natVal (Proxy :: Proxy width)
>     rotatedFrog = exceptError $
>                   warpAffine frog rotMat InterArea False False (BorderConstant black)
>     rotMat = getRotationMatrix2D (V2 250 195 :: V2 CFloat) 45 0.8

<<doc/generated/examples/fbMatcherImg.png fbMatcherImg>>

<http://docs.opencv.org/3.0-last-rst/modules/features2d/doc/common_interfaces_of_descriptor_matchers.html#flannbasedmatcher OpenCV Sphinx doc>

Brute-force descriptor matcher. For each descriptor in the first set, this matcher finds the closest descriptor in the second set by trying each one.
This descriptor matcher supports masking permissible matches of descriptor sets.

Example:

> bfMatcherImg :: forall (width :: Nat) (width2 :: Nat) (height :: Nat) (channels :: Nat) (depth :: *)
>   . ( Mat (ShapeT [height, width]) ('S channels) ('S depth) ~ Frog
>     , width2 ~ (*) width 2
>     ) => IO (Mat (ShapeT [height, width2]) ('S channels) ('S depth))
> bfMatcherImg = do
>     let (kpts1, descs1) = exceptError $ orbDetectAndCompute orb frog        Nothing
>         (kpts2, descs2) = exceptError $ orbDetectAndCompute orb rotatedFrog Nothing
>     bfmatcher <- newBFMatcher Norm_Hamming True
>     matches <- match bfmatcher
>                      descs1 -- Query descriptors
>                      descs2 -- Train descriptors
>                      Nothing
>     exceptErrorIO $ pureExcept $
>       withMatM (Proxy :: Proxy [height, width2])
>                (Proxy :: Proxy channels)
>                (Proxy :: Proxy depth)
>                white $ \imgM -> do
>         matCopyToM imgM (V2 0 0)     frog        Nothing
>         matCopyToM imgM (V2 width 0) rotatedFrog Nothing
>         -- Draw the matches as lines from the query image to the train image.
>         forM_ matches $ \dmatch -> do
>           let matchRec = dmatchAsRec dmatch
>               queryPt = kpts1 V.! fromIntegral (dmatchQueryIdx matchRec)
>               trainPt = kpts2 V.! fromIntegral (dmatchTrainIdx matchRec)
>               queryPtRec = keyPointAsRec queryPt
>               trainPtRec = keyPointAsRec trainPt
>           -- We translate the train point one width to the right in order to
>           -- match the position of rotatedFrog in imgM.
>           line imgM
>                (round <$> kptPoint queryPtRec :: V2 Int32)
>                ((round <$> kptPoint trainPtRec :: V2 Int32) ^+^ V2 width 0)
>                blue 1 LineType_AA 0
>   where
>     orb = mkOrb defaultOrbParams {orb_nfeatures = 50}
>     width = fromInteger $ natVal (Proxy :: Proxy width)
>     rotatedFrog = exceptError $
>                   warpAffine frog rotMat InterArea False False (BorderConstant black)
>     rotMat = getRotationMatrix2D (V2 250 195 :: V2 CFloat) 45 0.8

<<doc/generated/examples/bfMatcherImg.png bfMatcherImg>>

<http://docs.opencv.org/3.0-last-rst/modules/features2d/doc/common_interfaces_of_descriptor_matchers.html#bfmatcher OpenCV Sphinx doc>

Match in a pre-trained matcher.

SimpleBlobDetector filter parameters:

* Extracted blobs have an area between minArea (inclusive) and maxArea (exclusive).
* Extracted blobs have circularity ((4 * pi * Area) / (perimeter * perimeter)) between minCircularity (inclusive) and maxCircularity (exclusive).
* This filter compares the intensity of a binary image at the center of a blob to blobColor. If they differ, the blob is filtered out. Use blobColor = 0 to extract dark blobs and blobColor = 255 to extract light blobs.
* Extracted blobs have convexity (area / area of blob convex hull) between minConvexity (inclusive) and maxConvexity (exclusive).
* Extracted blobs have this ratio between minInertiaRatio (inclusive) and maxInertiaRatio (exclusive).

ORB parameters:

* The maximum number of features to retain.
* Pyramid decimation ratio, greater than 1. A value of 2 means the classical pyramid, where each next level has 4x less pixels than the previous, but such a big scale factor will degrade feature matching scores dramatically. On the other hand, a scale factor too close to 1 will mean that to cover a certain scale range you will need more pyramid levels, and so the speed will suffer.
* The number of pyramid levels. The smallest level will have linear size equal to input_image_linear_size divided by the scale factor raised to the number of levels.
* This is the size of the border where the features are not detected.
It should roughly match the patchSize parameter.

It should be 0 in the current implementation.

The number of points that produce each element of the oriented BRIEF descriptor. The default value 2 means the BRIEF where we take a random point pair and compare their brightnesses, so we get a 0/1 response. Other possible values are 3 and 4. For example, 3 means that we take 3 random points (of course, those point coordinates are random, but they are generated from the pre-defined seed, so each element of the BRIEF descriptor is computed deterministically from the pixel rectangle), find the point of maximum brightness and output the index of the winner (0, 1 or 2). Such output will occupy 2 bits, and therefore it will need a special variant of Hamming distance, denoted as Norm_Hamming2 (2 bits per bin). When the value is 4, we take 4 random points to compute each bin (that will also occupy 2 bits with possible values 0, 1, 2 or 3).

The default HarrisScore means that the Harris algorithm is used to rank features (the score is written to KeyPoint::score and is used to retain the best nfeatures features); FastScore is an alternative value of the parameter that produces slightly less stable keypoints, but it is a little faster to compute.

Size of the patch used by the oriented BRIEF descriptor. Of course, on smaller pyramid layers the perceived image area covered by a feature will be larger.

Detect keypoints and compute descriptors

Example:

@
orbDetectAndComputeImg
    :: forall (width :: Nat) (height :: Nat) (channels :: Nat) (depth :: *)
     . (Mat (ShapeT [height, width]) ('S channels) ('S depth) ~ Frog)
    => Mat (ShapeT [height, width]) ('S channels) ('S depth)
orbDetectAndComputeImg = exceptError $ do
    (kpts, _descs) <- orbDetectAndCompute orb frog Nothing
    withMatM (Proxy :: Proxy [height, width])
             (Proxy :: Proxy channels)
             (Proxy :: Proxy depth)
             white $ \imgM -> do
      void $ matCopyToM imgM (V2 0 0) frog Nothing
      forM_ kpts $ \kpt -> do
        let kptRec = keyPointAsRec kpt
        circle imgM (round <$> kptPoint kptRec :: V2 Int32) 5 blue 1 LineType_AA 0
  where
    orb = mkOrb defaultOrbParams
@

doc/generated/examples/orbDetectAndComputeImg.png orbDetectAndComputeImg

Detect keypoints and compute descriptors

Image. Mask.

Image. Mask.

L1 and L2 norms are preferable choices for SIFT and SURF descriptors, Norm_Hamming should be used with ORB, BRISK and BRIEF, and Norm_Hamming2 should be used with ORB when the WTA_K parameter is 3 or 4.

If false, the matcher finds the k nearest neighbours for each query descriptor (the default behaviour). If crossCheck == True, then the knnMatch() method with k=1 will only return pairs (i,j) such that for the i-th query descriptor the j-th descriptor in the matcher's collection is the nearest and vice versa, i.e. the matcher will only return consistent pairs. This technique usually produces the best results with a minimal number of outliers when there are enough matches. It is an alternative to the ratio test used by D. Lowe in the SIFT paper.

Create a new cascade classifier. Returns Nothing if the classifier is empty after initialization. This usually means that the file could not be loaded (e.g. it doesn't exist, is corrupt, etc.)

Example:

@
cascadeClassifierArnold
    :: forall (width :: Nat) (height :: Nat) (channels :: Nat) (depth :: *)
     . (Mat (ShapeT [height, width]) ('S channels) ('S depth) ~ Arnold_small)
    => IO (Mat (ShapeT [height, width]) ('S channels) ('S depth))
cascadeClassifierArnold = do
    -- Create two classifiers from data files.
    Just ccFrontal <- newCascadeClassifier "data/haarcascade_frontalface_default.xml"
    Just ccEyes    <- newCascadeClassifier "data/haarcascade_eye.xml"
    -- Detect some features.
    let eyes  = ccDetectMultiscale ccEyes    arnoldGray
        faces = ccDetectMultiscale ccFrontal arnoldGray
    -- Draw the result.
    pure $ exceptError $
      withMatM (Proxy :: Proxy [height, width])
               (Proxy :: Proxy channels)
               (Proxy :: Proxy depth)
               white $ \imgM -> do
        void $ matCopyToM imgM (V2 0 0) arnold_small Nothing
        forM_ eyes  $ \eyeRect  -> lift $ rectangle imgM eyeRect  blue  2 LineType_8 0
        forM_ faces $ \faceRect -> lift $ rectangle imgM faceRect green 2 LineType_8 0
  where
    arnoldGray = exceptError $ cvtColor bgr gray arnold_small
    ccDetectMultiscale cc = cascadeClassifierDetectMultiScale cc Nothing Nothing minSize maxSize
    minSize = Nothing :: Maybe (V2 Int32)
    maxSize = Nothing :: Maybe (V2 Int32)
@

doc/generated/examples/cascadeClassifierArnold.png cascadeClassifierArnold

Special version which returns bounding rectangle, rejectLevels, and levelWeights

Scale factor, default is 1.1
Min neighbours, default 3
Minimum size. Default: no minimum.
Maximum size. Default: no maximum.
Scale factor, default is 1.1
Min neighbours, default 3
Minimum size. Default: no minimum.
Maximum size. Default: no maximum.
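To get an intuition for what the scale factor controls, the sketch below is a small piece of plain Haskell (independent of the binding; `detectionScales` and its parameters are illustrative only) that enumerates the window sizes a multi-scale detector would scan with the default factor of 1.1:

```haskell
-- Illustrative only: with scale factor 1.1 the detection window grows
-- by 10% per step until it no longer fits inside the image.
-- Assumes factor > 1, otherwise the list would not terminate.
detectionScales :: Double -> Int -> Int -> [Int]
detectionScales factor windowSize imageSize =
    takeWhile (<= imageSize) $
    map (\k -> round (fromIntegral windowSize * factor ^^ k)) [0 :: Int ..]
```

For example, `detectionScales 1.1 24 100` starts at a 24-pixel window and stops once the scaled window would exceed a 100-pixel image; a smaller factor means more scales to evaluate, which is why lowering it slows detection down.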
doc/generated/examples/colorMapAutumImg.png colorMapAutumImg
doc/generated/examples/colorMapBoneImg.png colorMapBoneImg
doc/generated/examples/colorMapJetImg.png colorMapJetImg
doc/generated/examples/colorMapWinterImg.png colorMapWinterImg
doc/generated/examples/colorMapRainbowImg.png colorMapRainbowImg
doc/generated/examples/colorMapOceanImg.png colorMapOceanImg
doc/generated/examples/colorMapSummerImg.png colorMapSummerImg
doc/generated/examples/colorMapSpringImg.png colorMapSpringImg
doc/generated/examples/colorMapCoolImg.png colorMapCoolImg
doc/generated/examples/colorMapHsvImg.png colorMapHsvImg
doc/generated/examples/colorMapPinkImg.png colorMapPinkImg
doc/generated/examples/colorMapHotImg.png colorMapHotImg
doc/generated/examples/colorMapParulaImg.png colorMapParulaImg

Applies a GNU Octave/MATLAB equivalent colormap on a given image

Human perception isn't built for observing fine changes in grayscale images. Human eyes are more sensitive to observing changes between colors, so you often need to recolor your grayscale images to get a clue about them. OpenCV now comes with various colormaps to enhance the visualization in your computer vision application.

Example: grayscaleImg :: forall (height :: Nat) (width :: Nat) depth .
(height ~ 30, width ~ 256, depth ~ Word8) => Mat (ShapeT [height, width]) ('S 1) ('S depth) grayscaleImg = exceptError $ matFromFunc (Proxy :: Proxy [height, width]) (Proxy :: Proxy 1) (Proxy :: Proxy depth) grayscale where grayscale :: [Int] -> Int -> Word8 grayscale [_y, x] 0 = fromIntegral x grayscale _pos _channel = error "impossible" type ColorMapImg = Mat (ShapeT [30, 256]) ('S 3) ('S Word8) mkColorMapImg :: ColorMap -> ColorMapImg mkColorMapImg cmap = exceptError $ applyColorMap cmap grayscaleImg colorMapAutumImg :: ColorMapImg colorMapBoneImg :: ColorMapImg colorMapJetImg :: ColorMapImg colorMapWinterImg :: ColorMapImg colorMapRainbowImg :: ColorMapImg colorMapOceanImg :: ColorMapImg colorMapSummerImg :: ColorMapImg colorMapSpringImg :: ColorMapImg colorMapCoolImg :: ColorMapImg colorMapHsvImg :: ColorMapImg colorMapPinkImg :: ColorMapImg colorMapHotImg :: ColorMapImg colorMapParulaImg :: ColorMapImg colorMapAutumImg = mkColorMapImg ColorMapAutumn colorMapBoneImg = mkColorMapImg ColorMapBone colorMapJetImg = mkColorMapImg ColorMapJet colorMapWinterImg = mkColorMapImg ColorMapWinter colorMapRainbowImg = mkColorMapImg ColorMapRainbow colorMapOceanImg = mkColorMapImg ColorMapOcean colorMapSummerImg = mkColorMapImg ColorMapSummer colorMapSpringImg = mkColorMapImg ColorMapSpring colorMapCoolImg = mkColorMapImg ColorMapCool colorMapHsvImg = mkColorMapImg ColorMapHsv colorMapPinkImg = mkColorMapImg ColorMapPink colorMapHotImg = mkColorMapImg ColorMapHot colorMapParulaImg = mkColorMapImg ColorMapParula  'doc/generated/examples/grayscaleImg.png grayscaleImg Thttp://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/colormaps.html#applycolormapOpenCV Sphinx doc e     f g h i j k l m n o p q r s           e      f g h i j k l m n o p q r sNone+,9:;DLQRST[/Thickness of lines the contours are drawn with.&Draw the contour, filling in the area.-Normal size sans-serif font. Does not have a  variant. 
$doc/generated/FontHersheySimplex.pngFontHersheySimplexSmall size sans-serif font. "doc/generated/FontHersheyPlain.pngFontHersheyPlain *doc/generated/FontHersheyPlain_slanted.pngFontHersheyPlain0Normal size sans-serif font (more complex than ). Does not have a  variant. #doc/generated/FontHersheyDuplex.pngFontHersheyDuplexNormal size serif font. $doc/generated/FontHersheyComplex.pngFontHersheyComplex ,doc/generated/FontHersheyComplex_slanted.pngFontHersheyComplex*Normal size serif font (more complex than ). $doc/generated/FontHersheyTriplex.pngFontHersheyTriplex ,doc/generated/FontHersheyTriplex_slanted.pngFontHersheyTriplexSmaller version of . )doc/generated/FontHersheyComplexSmall.pngFontHersheyComplexSmall 1doc/generated/FontHersheyComplexSmall_slanted.pngFontHersheyComplexSmall)Hand-writing style font. Does not have a  variant. *doc/generated/FontHersheyScriptSimplex.pngFontHersheyScriptSimplexMore complex variant of . Does not have a  variant. *doc/generated/FontHersheyScriptComplex.pngFontHersheyScriptComplex$8-connected line. doc/generated/LineType_8.png8-connected line%4-connected line. doc/generated/LineType_4.png4-connected line&Antialiased line. 
doc/generated/LineType_AA.png Antialiased line

Draws an arrow segment pointing from the first point to the second one

Example:

@
arrowedLineImg :: Mat (ShapeT [200, 300]) ('S 4) ('S Word8)
arrowedLineImg = exceptError $
    withMatM (Proxy :: Proxy [200, 300])
             (Proxy :: Proxy 4)
             (Proxy :: Proxy Word8)
             transparent $ \imgM -> do
      arrowedLine imgM (V2 10 130 :: V2 Int32) (V2 190  40 :: V2 Int32) blue 5 LineType_AA 0 0.15
      arrowedLine imgM (V2 210  50 :: V2 Int32) (V2 250 180 :: V2 Int32) red  8 LineType_AA 0 0.4
@

doc/generated/examples/arrowedLineImg.png arrowedLineImg

http://docs.opencv.org/3.0.0/d6/d6e/group__imgproc__draw.html#ga0a165a3ca093fd488ac709fdf10c05b2 OpenCV Doxygen doc
http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/drawing_functions.html#arrowedline OpenCV Sphinx doc

Draws a circle.

Example:

@
circleImg :: Mat (ShapeT [200, 400]) ('S 4) ('S Word8)
circleImg = exceptError $
    withMatM (Proxy :: Proxy [200, 400])
             (Proxy :: Proxy 4)
             (Proxy :: Proxy Word8)
             transparent $ \imgM -> do
      lift $ circle imgM (V2 100 100 :: V2 Int32) 90 blue 5    LineType_AA 0
      lift $ circle imgM (V2 300 100 :: V2 Int32) 45 red  (-1) LineType_AA 0
@

doc/generated/examples/circleImg.png circleImg

http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/drawing_functions.html#circle OpenCV Sphinx doc

Draws a simple or thick elliptic arc or fills an ellipse sector

Example:

@
ellipseImg :: Mat (ShapeT [200, 400]) ('S 4) ('S Word8)
ellipseImg = exceptError $
    withMatM (Proxy :: Proxy [200, 400])
             (Proxy :: Proxy 4)
             (Proxy :: Proxy Word8)
             transparent $ \imgM -> do
      lift $ ellipse imgM (V2 100 100 :: V2 Int32) (V2 90 60 :: V2 Int32)  30  0 360 blue 5    LineType_AA 0
      lift $ ellipse imgM (V2 300 100 :: V2 Int32) (V2 80 40 :: V2 Int32) 160 40 290 red  (-1) LineType_AA 0
@

doc/generated/examples/ellipseImg.png ellipseImg

http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/drawing_functions.html#ellipse OpenCV Sphinx doc

Fills a convex polygon. The function fillConvexPoly draws a filled convex polygon.
This function is much faster than the function + . It can fill not only convex polygons but any monotonic polygon without self-intersections, that is, a polygon whose contour intersects every horizontal line (scan line) twice at the most (though, its top-most and/or the bottom edge could be horizontal).Example: OfillConvexPolyImg :: forall (h :: Nat) (w :: Nat) . (h ~ 300, w ~ 300) => Mat (ShapeT [h, w]) ('S 4) ('S Word8) fillConvexPolyImg = exceptError $ withMatM (Proxy :: Proxy [h, w]) (Proxy :: Proxy 4) (Proxy :: Proxy Word8) transparent $ \imgM -> do lift $ fillConvexPoly imgM pentagon blue LineType_AA 0 where pentagon :: V.Vector (V2 Int32) pentagon = V.fromList [ V2 150 0 , V2 7 104 , V2 62 271 , V2 238 271 , V2 293 104 ]  ,doc/generated/examples/fillConvexPolyImg.pngfillConvexPolyImg ]http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/drawing_functions.html#fillconvexpolyOpenCV Sphinx doc+/Fills the area bounded by one or more polygons.Example: rookPts :: Int32 -> Int32 -> V.Vector (V.Vector (V2 Int32)) rookPts w h = V.singleton $ V.fromList [ V2 ( w `div` 4) ( 7*h `div` 8) , V2 ( 3*w `div` 4) ( 7*h `div` 8) , V2 ( 3*w `div` 4) (13*h `div` 16) , V2 ( 11*w `div` 16) (13*h `div` 16) , V2 ( 19*w `div` 32) ( 3*h `div` 8) , V2 ( 3*w `div` 4) ( 3*h `div` 8) , V2 ( 3*w `div` 4) ( h `div` 8) , V2 ( 26*w `div` 40) ( h `div` 8) , V2 ( 26*w `div` 40) ( h `div` 4) , V2 ( 22*w `div` 40) ( h `div` 4) , V2 ( 22*w `div` 40) ( h `div` 8) , V2 ( 18*w `div` 40) ( h `div` 8) , V2 ( 18*w `div` 40) ( h `div` 4) , V2 ( 14*w `div` 40) ( h `div` 4) , V2 ( 14*w `div` 40) ( h `div` 8) , V2 ( w `div` 4) ( h `div` 8) , V2 ( w `div` 4) ( 3*h `div` 8) , V2 ( 13*w `div` 32) ( 3*h `div` 8) , V2 ( 5*w `div` 16) (13*h `div` 16) , V2 ( w `div` 4) (13*h `div` 16) ] fillPolyImg :: forall (h :: Nat) (w :: Nat) . 
(h ~ 300, w ~ 300) => Mat (ShapeT [h, w]) ('S 4) ('S Word8) fillPolyImg = exceptError $ withMatM (Proxy :: Proxy [h, w]) (Proxy :: Proxy 4) (Proxy :: Proxy Word8) transparent $ \imgM -> do lift $ fillPoly imgM (rookPts w h) blue LineType_AA 0 where h = fromInteger $ natVal (Proxy :: Proxy h) w = fromInteger $ natVal (Proxy :: Proxy w)  &doc/generated/examples/fillPolyImg.png fillPolyImg Whttp://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/drawing_functions.html#fillpolyOpenCV Sphinx doc,Draws several polygonal curvesExample: polylinesImg :: forall (h :: Nat) (w :: Nat) . (h ~ 300, w ~ 300) => Mat (ShapeT [h, w]) ('S 4) ('S Word8) polylinesImg = exceptError $ withMatM (Proxy :: Proxy [h, w]) (Proxy :: Proxy 4) (Proxy :: Proxy Word8) transparent $ \imgM -> do lift $ polylines imgM (rookPts w h) True blue 2 LineType_AA 0 where h = fromInteger $ natVal (Proxy :: Proxy h) w = fromInteger $ natVal (Proxy :: Proxy w)  'doc/generated/examples/polylinesImg.png polylinesImg Xhttp://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/drawing_functions.html#polylinesOpenCV Sphinx doc-+Draws a line segment connecting two points.Example: lineImg :: Mat (ShapeT [200, 300]) ('S 4) ('S Word8) lineImg = exceptError $ withMatM (Proxy :: Proxy [200, 300]) (Proxy :: Proxy 4) (Proxy :: Proxy Word8) transparent $ \imgM -> do lift $ line imgM (V2 10 130 :: V2 Int32) (V2 190 40 :: V2 Int32) blue 5 LineType_AA 0 lift $ line imgM (V2 210 50 :: V2 Int32) (V2 250 180 :: V2 Int32) red 8 LineType_AA 0  "doc/generated/examples/lineImg.pnglineImg Shttp://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/drawing_functions.html#lineOpenCV Sphinx doc.=Calculates the size of a box that contains the specified text Zhttp://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/drawing_functions.html#gettextsizeOpenCV Sphinx doc/Draws a text string.The function putText renders the specified text string in the image. 
Symbols that cannot be rendered using the specified font are replaced by question marks.

Example:

@
putTextImg :: Mat ('S ['D, 'S 400]) ('S 4) ('S Word8)
putTextImg = exceptError $
    withMatM (height ::: (Proxy :: Proxy 400) ::: Z)
             (Proxy :: Proxy 4)
             (Proxy :: Proxy Word8)
             transparent $ \imgM -> do
      forM_ (zip [0..] [minBound .. maxBound]) $ \(n, fontFace) ->
        lift $ putText imgM
                       (T.pack $ show fontFace)
                       (V2 10 (35 + n * 30) :: V2 Int32)
                       (Font fontFace NotSlanted 1.0)
                       black
                       1
                       LineType_AA
                       False
  where
    height :: Int32
    height = 50 + fromIntegral (30 * fromEnum (maxBound :: FontFace))
@

doc/generated/examples/putTextImg.png putTextImg

http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/drawing_functions.html#puttext OpenCV Sphinx doc

Draws a simple, thick, or filled up-right rectangle

Example:

@
rectangleImg :: Mat (ShapeT [200, 400]) ('S 4) ('S Word8)
rectangleImg = exceptError $
    withMatM (Proxy :: Proxy [200, 400])
             (Proxy :: Proxy 4)
             (Proxy :: Proxy Word8)
             transparent $ \imgM -> do
      lift $ rectangle imgM (toRect $ HRect (V2  10 10) (V2 180 180)) blue 5    LineType_8 0
      lift $ rectangle imgM (toRect $ HRect (V2 260 30) (V2  80 140)) red  (-1) LineType_8 0
@

doc/generated/examples/rectangleImg.png rectangleImg

http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/drawing_functions.html#rectangle OpenCV Sphinx doc

Draw contours onto a black image.

Example:

@
flowerContours :: Mat ('S ['S 512, 'S 768]) ('S 3) ('S Word8)
flowerContours = exceptError $
    withMatM (Proxy :: Proxy [512, 768])
             (Proxy :: Proxy 3)
             (Proxy :: Proxy Word8)
             black $ \imgM -> do
      edges <- thaw $ exceptError $
               cvtColor bgr gray flower_768x512 >>= canny 30 20 Nothing CannyNormL1
      contours <- findContours ContourRetrievalList ContourApproximationSimple edges
      lift $ drawContours (V.map contourPoints contours)
                          red
                          (OutlineContour LineType_AA 1)
                          imgM
@

doc/generated/examples/flowerContours.png flowerContours

Draws a marker on a predefined position in an image.

The marker will be drawn as a 20-pixel cross.

Example:
@
markerImg :: Mat (ShapeT [100, 100]) ('S 4) ('S Word8)
markerImg = exceptError $
    withMatM (Proxy :: Proxy [100, 100])
             (Proxy :: Proxy 4)
             (Proxy :: Proxy Word8)
             transparent $ \imgM -> do
      lift $ marker imgM (50 :: V2 Int32) blue
@

doc/generated/examples/markerImg.png markerImg

Image. The point the arrow starts from. The point the arrow points to. Line color. Line thickness. Number of fractional bits in the point coordinates. The length of the arrow tip in relation to the arrow length.

Image where the circle is drawn. Center of the circle. Radius of the circle. Circle color. Thickness of the circle outline, if positive. Negative thickness means that a filled circle is to be drawn. Type of the circle boundary. Number of fractional bits in the coordinates of the center and in the radius value.

Image. Center of the ellipse. Half of the size of the ellipse main axes. Ellipse rotation angle in degrees. Starting angle of the elliptic arc in degrees. Ending angle of the elliptic arc in degrees. Ellipse color. Thickness of the ellipse arc outline, if positive. Otherwise, this indicates that a filled ellipse sector is to be drawn. Type of the ellipse boundary. Number of fractional bits in the coordinates of the center and values of axes.

Image. Polygon vertices. Polygon color. Number of fractional bits in the vertex coordinates.

Image. Polygons. Polygon color. Number of fractional bits in the vertex coordinates.

Image. Vertices. Flag indicating whether the drawn polylines are closed or not. If they are closed, the function draws a line from the last vertex of each curve to its first vertex. Thickness of the polyline edges. Number of fractional bits in the vertex coordinates.

Image. First point of the line segment. Second point of the line segment. Line color. Line thickness. Number of fractional bits in the point coordinates.

Thickness of lines used to render the text.

(size, baseLine) = (the size of a box that contains the specified text
, y-coordinate of the baseline relative to the bottom-most text point)/Image.Text string to be drawn.3Bottom-left corner of the text string in the image. Text color.+Thickness of the lines used to draw a text.When  ^, the image data origin is at the bottom-left corner. Otherwise, it is at the top-left corner.0Image.0Rectangle color or brightness (grayscale image).Line thickness.3Number of fractional bits in the point coordinates. 1Color of the contours.Image.2 The image to draw the marker on.,The point where the crosshair is positioned. Line color.$ !"#$%&'()*+,-./012$#$%& !"'()*+,-./012. t u v w x y z { | } ~  !"#$%& '()*+,-./0 12None+,2349:;<=DLQRST[@'Harris detector and it free k parameterFSA flag, indicating whether to use the more accurate L2 norm or the default L1 norm.I"Finds edges in an image using the  Mhttp://docs.opencv.org/2.4/modules/imgproc/doc/feature_detection.html#canny86Canny86 algorithm.Example: cannyImg :: forall shape channels depth . (Mat shape channels depth ~ Lambda) => Mat shape ('S 1) depth cannyImg = exceptError $ canny 30 200 Nothing CannyNormL1 lambda  #doc/generated/examples/cannyImg.pngcannyImgJ&Determines strong corners on an image.\The function finds the most prominent corners in the image or in the specified image region.wFunction calculates the corner quality measure at every source image pixel using the cornerMinEigenVal or cornerHarris.dFunction performs a non-maximum suppression (the local maximums in 3 x 3 neighborhood are retained).2The corners with the minimal eigenvalue less than .֚֞֊֢֕֒֝{֎֟֎֕ * max(x,y) qualityMeasureMap(x,y) are rejected.PThe remaining corners are sorted by the quality measure in the descending order.jFunction throws away each corner for which there is a stronger corner at a distance less than maxDistance.Example: goodFeaturesToTrackTraces :: forall (width :: Nat) (height :: Nat) (channels :: Nat) (depth :: *) . 
(Mat (ShapeT [height, width]) ('S channels) ('S depth) ~ Frog) => Mat (ShapeT [height, width]) ('S channels) ('S depth) goodFeaturesToTrackTraces = exceptError $ do imgG <- cvtColor bgr gray frog let features = goodFeaturesToTrack imgG 20 0.01 0.5 Nothing Nothing CornerMinEigenVal withMatM (Proxy :: Proxy [height, width]) (Proxy :: Proxy channels) (Proxy :: Proxy depth) white $ \imgM -> do void $ matCopyToM imgM (V2 0 0) frog Nothing forM_ features $ \f -> do circle imgM (round <$> f :: V2 Int32) 2 blue 5 LineType_AA 0  4doc/generated/examples/goodFeaturesToTrackTraces.pnggoodFeaturesToTrackTracesKTFinds circles in a grayscale image using a modification of the Hough transformation.Example:  houghCircleTraces :: forall (width :: Nat) (height :: Nat) (channels :: Nat) (depth :: *) . (Mat (ShapeT [height, width]) ('S channels) ('S depth) ~ Circles_1000x625) => Mat (ShapeT [height, width]) ('S channels) ('S depth) houghCircleTraces = exceptError $ do imgG <- cvtColor bgr gray circles_1000x625 let circles = houghCircles 1 10 Nothing Nothing Nothing Nothing imgG withMatM (Proxy :: Proxy [height, width]) (Proxy :: Proxy channels) (Proxy :: Proxy depth) white $ \imgM -> do void $ matCopyToM imgM (V2 0 0) circles_1000x625 Nothing forM_ circles $ \c -> do circle imgM (round <$> circleCenter c :: V2 Int32) (round (circleRadius c)) blue 1 LineType_AA 0  ,doc/generated/examples/houghCircleTraces.pnghoughCircleTracesLExample: houghLinesPTraces :: forall (width :: Nat) (height :: Nat) (channels :: Nat) (depth :: * ) . 
(Mat (ShapeT [height, width]) ('S channels) ('S depth) ~ Building_868x600) => Mat (ShapeT [height, width]) ('S channels) ('S depth) houghLinesPTraces = exceptError $ do edgeImg <- canny 50 200 Nothing CannyNormL1 building_868x600 edgeImgBgr <- cvtColor gray bgr edgeImg withMatM (Proxy :: Proxy [height, width]) (Proxy :: Proxy channels) (Proxy :: Proxy depth) white $ \imgM -> do edgeImgM <- thaw edgeImg lineSegments <- houghLinesP 1 (pi / 180) 80 (Just 30) (Just 10) edgeImgM void $ matCopyToM imgM (V2 0 0) edgeImgBgr Nothing forM_ lineSegments $ \lineSegment -> do line imgM (lineSegmentStart lineSegment) (lineSegmentStop lineSegment) red 2 LineType_8 0  ,doc/generated/examples/houghLinesPTraces.pnghoughLinesPTraces ;<=>?@ABCDEFGHI-First threshold for the hysteresis procedure..Second threshold for the hysteresis procedure.Aperture size for the Sobel()) operator. If not specified defaults to 3. Must be 3, 5 or 7.SA flag, indicating whether to use the more accurate L2 norm or the default L1 norm.8-bit input image.J;Input 8-bit or floating-point 32-bit, single-channel image.rMaximum number of corners to return. If there are more corners than are found, the strongest of them is returned.Parameter characterizing the minimal accepted quality of image corners. The parameter value is multiplied by the best corner quality measure, which is the minimal eigenvalue (see cornerMinEigenVal ) or the Harris function response (see cornerHarris ). The corners with the quality measure less than the product are rejected. For example, if the best corner has the quality measure = 1500, and the qualityLevel=0.01 , then all the corners with the quality measure less than 15 are rejected.AMinimum possible Euclidean distance between the returned corners.Optional region of interest. 
If the image is not empty (it needs to have the type CV_8UC1 and the same size as image ), it specifies the region in which the corners are detected.Size of an average block for computing a derivative covariation matrix over each pixel neighborhood. See cornerEigenValsAndVecs._Parameter indicating whether to use a Harris detector (see cornerHarris) or cornerMinEigenVal.KVInverse ratio of the accumulator resolution to the image resolution. For example, if dp=1B, the accumulator has the same resolution as the input image. If dp=23, the accumulator has half as big width and height.Minimum distance between the centers of the detected circles. If the parameter is too small, multiple neighbor circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed..The higher threshold of the two passed to the IA edge detector (the lower one is twice smaller). Default is 100.The accumulator threshold for the circle centers at the detection stage. The smaller it is, the more false circles may be detected. Circles, corresponding to the larger accumulator values, will be returned first. Default is 100.Minimum circle radius.Maximum circle radius.L1Distance resolution of the accumulator in pixels./Angle resolution of the accumulator in radians.dAccumulator threshold parameter. Only those lines are returned that get enough votes (> threshold).BMinimum line length. Line segments shorter than that are rejected.AMaximum allowed gap between points on the same line to link them..Source image. May be modified by the function.M;<=>?@ABCDEFGHIJKLIJKL?@AFGHBCDE;<=> ;<=>?@ABCDEFGHIJKLMNone+,9:;DLQRST[ YConnectivity value. The default value of 4 means that only the four nearest neighbor pixels (those that share an edge) are considered. 
A connectivity value of 8 means that the eight nearest neighbor pixels (those that share a corner) will be considered.

Value between 1 and 255 with which to fill the mask (the default value is 1).

If set, the difference between the current pixel and seed pixel is considered. Otherwise, the difference between neighbor pixels is considered (that is, the range is floating).

If set, the function does not change the image (newVal is ignored), and only fills the mask with the value specified in bits 8-16 of flags as described above. This option only makes sense in function variants that have the mask parameter.

Converts an image from one color space to another

The function converts an input image from one color space to another. In case of a transformation to-from RGB color space, the order of the channels should be specified explicitly (RGB or BGR). Note that the default color format in OpenCV is often referred to as RGB but it is actually BGR (the bytes are reversed). So the first byte in a standard (24-bit) color image will be an 8-bit Blue component, the second byte will be Green, and the third byte will be Red. The fourth, fifth, and sixth bytes would then be the second pixel (Blue, then Green, then Red), and so on.

The conventional ranges for R, G, and B channel values are:

- 0 to 255 for CV_8U images
- 0 to 65535 for CV_16U images
- 0 to 1 for CV_32F images

In case of linear transformations, the range does not matter. But in case of a non-linear transformation, an input RGB image should be normalized to the proper value range to get the correct results, for example, for an RGB to L*u*v* transformation. For example, if you have a 32-bit floating-point image directly converted from an 8-bit image without any scaling, then it will have the 0..255 value range instead of the 0..1 assumed by the function. So, before calling cvtColor, you first need to scale the image down: cvtColor (img * 1/255) 'ColorConvBGR2Luv'. If you use cvtColor with 8-bit images, the conversion will have some information lost.
For many applications, this will not be noticeable but it is recommended to use 32-bit images in applications that need the full range of colors or that convert an image before an operation and then convert back.pIf conversion adds the alpha channel, its value will set to the maximum of corresponding channel range: 255 for   , 65535 for  , 1 for  .Example: cvtColorImg :: forall (width :: Nat) (width2 :: Nat) (height :: Nat) (channels :: Nat) (depth :: *) . ( Mat (ShapeT [height, width]) ('S channels) ('S depth) ~ Birds_512x341 , width2 ~ (width + width) ) => Mat (ShapeT [height, width2]) ('S channels) ('S depth) cvtColorImg = exceptError $ withMatM ((Proxy :: Proxy height) ::: (Proxy :: Proxy width2) ::: Z) (Proxy :: Proxy channels) (Proxy :: Proxy depth) white $ \imgM -> do birds_gray <- pureExcept $ cvtColor gray bgr =<< cvtColor bgr gray birds_512x341 matCopyToM imgM (V2 0 0) birds_512x341 Nothing matCopyToM imgM (V2 w 0) birds_gray Nothing lift $ arrowedLine imgM (V2 startX midY) (V2 pointX midY) red 4 LineType_8 0 0.15 where h, w :: Int32 h = fromInteger $ natVal (Proxy :: Proxy height) w = fromInteger $ natVal (Proxy :: Proxy width) startX, pointX :: Int32 startX = round $ fromIntegral w * (0.95 :: Double) pointX = round $ fromIntegral w * (1.05 :: Double) midY = h `div` 2  &doc/generated/examples/cvtColorImg.png cvtColorImg http://goo.gl/3rfrhuOpenCV Sphinx Doc^ The function ^S fills a connected component starting from the seed point with the specified color.The connectivity is determined by the color/brightness closeness of the neighbor pixels. See the OpenCV documentation for details on the algorithm.Example: floodFillImg :: forall (width :: Nat) (width2 :: Nat) (height :: Nat) (channels :: Nat) (depth :: *) . 
( Mat (ShapeT [height, width]) ('S channels) ('S depth) ~ Sailboat_768x512 , width2 ~ (width + width) ) => Mat (ShapeT [height, width2]) ('S channels) ('S depth) floodFillImg = exceptError $ withMatM ((Proxy :: Proxy height) ::: (Proxy :: Proxy width2) ::: Z) (Proxy :: Proxy channels) (Proxy :: Proxy depth) white $ \imgM -> do sailboatEvening_768x512 <- thaw sailboat_768x512 mask <- mkMatM (Proxy :: Proxy [height + 2, width + 2]) (Proxy :: Proxy 1) (Proxy :: Proxy Word8) black circle mask (V2 450 120 :: V2 Int32) 45 white (-1) LineType_AA 0 rect <- floodFill sailboatEvening_768x512 (Just mask) seedPoint eveningRed (Just tolerance) (Just tolerance) defaultFloodFillOperationFlags rectangle sailboatEvening_768x512 rect blue 2 LineType_8 0 frozenSailboatEvening_768x512 <- freeze sailboatEvening_768x512 matCopyToM imgM (V2 0 0) sailboat_768x512 Nothing matCopyToM imgM (V2 w 0) frozenSailboatEvening_768x512 Nothing lift $ arrowedLine imgM (V2 startX midY) (V2 pointX midY) red 4 LineType_8 0 0.15 where h, w :: Int32 h = fromInteger $ natVal (Proxy :: Proxy height) w = fromInteger $ natVal (Proxy :: Proxy width) startX, pointX :: Int32 startX = round $ fromIntegral w * (0.95 :: Double) pointX = round $ fromIntegral w * (1.05 :: Double) midY = h `div` 2 seedPoint :: V2 Int32 seedPoint = V2 100 50 eveningRed :: V4 Double eveningRed = V4 0 100 200 255 tolerance :: V4 Double tolerance = pure 7  'doc/generated/examples/floodFillImg.png floodFillImg http://goo.gl/9XIIneOpenCV Sphinx Doc`5Applies a fixed-level threshold to each array element?The function applies fixed-level thresholding to a single-channel array. The function is typically used to get a bi-level (binary) image out of a grayscale image or for removing a noise, that is, filtering out pixels with too small or too large values. 
There are several types of thresholding supported by the function.Example: }grayBirds :: Mat (ShapeT [341, 512]) ('S 1) ('S Word8) grayBirds = exceptError $ cvtColor bgr gray birds_512x341 threshBinaryBirds :: Mat (ShapeT [341, 512]) ('S 3) ('S Word8) threshBinaryBirds = exceptError $ cvtColor gray bgr $ fst $ exceptError $ threshold (ThreshVal_Abs 100) (Thresh_Binary 150) grayBirds threshBinaryInvBirds :: Mat (ShapeT [341, 512]) ('S 3) ('S Word8) threshBinaryInvBirds = exceptError $ cvtColor gray bgr $ fst $ exceptError $ threshold (ThreshVal_Abs 100) (Thresh_BinaryInv 150) grayBirds threshTruncateBirds :: Mat (ShapeT [341, 512]) ('S 3) ('S Word8) threshTruncateBirds = exceptError $ cvtColor gray bgr $ fst $ exceptError $ threshold (ThreshVal_Abs 100) Thresh_Truncate grayBirds threshToZeroBirds :: Mat (ShapeT [341, 512]) ('S 3) ('S Word8) threshToZeroBirds = exceptError $ cvtColor gray bgr $ fst $ exceptError $ threshold (ThreshVal_Abs 100) Thresh_ToZero grayBirds threshToZeroInvBirds :: Mat (ShapeT [341, 512]) ('S 3) ('S Word8) threshToZeroInvBirds = exceptError $ cvtColor gray bgr $ fst $ exceptError $ threshold (ThreshVal_Abs 100) Thresh_ToZeroInv grayBirds  ,doc/generated/examples/threshBinaryBirds.pngthreshBinaryBirds  /doc/generated/examples/threshBinaryInvBirds.pngthreshBinaryInvBirds  .doc/generated/examples/threshTruncateBirds.pngthreshTruncateBirds  ,doc/generated/examples/threshToZeroBirds.pngthreshToZeroBirds /doc/generated/examples/threshToZeroInvBirds.pngthreshToZeroInvBirds dhttp://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/miscellaneous_transformations.html#thresholdOpenCV Sphinx docaIPerforms a marker-based image segmentation using the watershed algorithm.The function implements one of the variants of watershed, non-parametric marker-based segmentation algorithm, described in [Meyer, F. 
Color Image Segmentation, ICIP92, 1992]. Before passing the image to the function, you have to roughly outline the desired regions in the image markers with positive (>0) indices. So, every region is represented as one or more connected components with the pixel values 1, 2, 3, and so on. Such markers can be retrieved from a binary mask using findContours and drawContours. The markers are seeds of the future image regions. All the other pixels in markers, whose relation to the outlined regions is not known and should be defined by the algorithm, should be set to 0. In the function output, each pixel in markers is set to a value of the seed components or to -1 at boundaries between the regions. http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/miscellaneous_transformations.html#watershed OpenCV Sphinx doc Runs the http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/miscellaneous_transformations.html#grabcut GrabCut algorithm. Example: grabCutBird :: Birds_512x341 grabCutBird = exceptError $ do mask <- withMatM (Proxy :: Proxy [341, 512]) (Proxy :: Proxy 1) (Proxy :: Proxy Word8) black $ \mask -> do fgTmp <- mkMatM (Proxy :: Proxy [1, 65]) (Proxy :: Proxy 1) (Proxy :: Proxy Double) black bgTmp <- mkMatM (Proxy :: Proxy [1, 65]) (Proxy :: Proxy 1) (Proxy :: Proxy Double) black grabCut birds_512x341 mask fgTmp bgTmp 5 (GrabCut_InitWithRect rect) mask' <- matScalarCompare mask 3 Cmp_Ge withMatM (Proxy :: Proxy [341, 512]) (Proxy :: Proxy 3) (Proxy :: Proxy Word8) transparent $ \imgM -> do matCopyToM imgM (V2 0 0) birds_512x341 (Just mask') where rect :: Rect Int32 rect = toRect $ HRect { hRectTopLeft = V2 264 60, hRectSize = V2 248 281 } doc/generated/examples/grabCutBird.png grabCutBird Returns 0 if the pixels are not in the range, 255 otherwise. Convert from . Make sure the source image has this . Convert to . Source image. Input/output 1- or 3-channel, 8-bit, or floating-point image.
It is modified by the function unless the FLOODFILL_MASK_ONLY flag is set. Operation mask that should be a single-channel 8-bit image, 2 pixels wider and 2 pixels taller than image. Since this is both an input and output parameter, you are responsible for initializing it. Flood-filling cannot go across non-zero pixels in the input mask. For example, an edge detector output can be used as a mask to stop filling at edges. On output, pixels in the mask corresponding to filled pixels in the image are set to 1 or to a value specified in flags as described below. It is therefore possible to use the same mask in multiple calls to the function to make sure the filled areas do not overlap. Note: Since the mask is larger than the filled image, a pixel (x, y) in image corresponds to the pixel (x+1, y+1) in the mask. Starting point. New value of the repainted domain pixels. Maximal lower brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component. Zero by default. Maximal upper brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component. Zero by default. Input 8-bit 3-channel image. Input/output 32-bit single-channel image (map) of markers. Input 8-bit 3-channel image. Input/output 8-bit single-channel mask. The mask is initialized by the function when mode is set to GC_INIT_WITH_RECT. Its elements may have one of the following values: GC_BGD defines an obvious background pixel. GC_FGD defines an obvious foreground (object) pixel. GC_PR_BGD defines a possible background pixel. GC_PR_FGD defines a possible foreground pixel. Temporary array for the background model. Do not modify it while you are processing the same image. Temporary array for the foreground model.
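The fill semantics described for floodFill (grow a region from the seed while values stay within a tolerance) can be sketched as a 4-connected fill over an association-list grid. This is an illustrative toy, not the library's floodFill: it mimics the fixed-range variant, where values are compared against the seed value rather than the current neighbour, and all names are invented:

```haskell
import qualified Data.Set as Set

-- Illustrative 4-connected flood fill over an association-list "image".
-- 'tol' mimics the loDiff/upDiff tolerance (fixed-range variant).
type Pos = (Int, Int)

floodRegion :: [(Pos, Int)] -> Pos -> Int -> Set.Set Pos
floodRegion grid seed tol = case lookup seed grid of
    Nothing      -> Set.empty
    Just seedVal -> go Set.empty [seed] seedVal
  where
    go visited [] _ = visited
    go visited (p:ps) seedVal
      | p `Set.member` visited = go visited ps seedVal
      | otherwise = case lookup p grid of
          Just v | abs (v - seedVal) <= tol ->
            go (Set.insert p visited) (neighbours p ++ ps) seedVal
          _ -> go visited ps seedVal
    neighbours (x, y) = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

-- A 3x2 grid: a near-uniform region on the left, a distinct one on the right.
demoGrid :: [(Pos, Int)]
demoGrid =
  [ ((0,0),10), ((1,0),10), ((2,0),50)
  , ((0,1),10), ((1,1),12), ((2,1),50) ]

main :: IO ()
main = print (Set.toList (floodRegion demoGrid (0, 0) 3))
-- [(0,0),(0,1),(1,0),(1,1)]: the 12 is within tolerance 3 of the seed's 10.
```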
Do not modify it while you are processing the same image.Number of iterations the algorithm should make before returning the result. Note that the result can be refined with further calls with mode==GC_INIT_WITH_MASK or mode==GC_EVAL.Operation modec Lower bound Upper bound      !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghiWXYZ[\]^_`abc]^WXYZ[\_`abc WXYZ[\]^_ `abcNone+,9:;DLQRST[d"Whether to use normalisation. See g.g [http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/object_detection.html#matchtemplateOpenCV Sphinx doch not normed: ]http://docs.opencv.org/3.0-last-rst/_images/math/f096a706cb9499736423f10d901c7fe13a1e6926.pngnormed: ]http://docs.opencv.org/3.0-last-rst/_images/math/6d6a720237b3a4c1365c8e86a9cfcf0895d5e265.pngi not normed: ]http://docs.opencv.org/3.0-last-rst/_images/math/93f1747a86a3c5095a0e6a187442c6e2a0ae0968.pngnormed: ]http://docs.opencv.org/3.0-last-rst/_images/math/6a72ad9ae17c4dad88e33ed16308fc1cfba549b8.pngj not normed: ]http://docs.opencv.org/3.0-last-rst/_images/math/c9b62df96d0692d90cc1d8a5912a68a44461910c.pngwhere ]http://docs.opencv.org/3.0-last-rst/_images/math/ffb6954b6020b02e13b73c79bd852c1627cfb79c.pngnormed: ]http://docs.opencv.org/3.0-last-rst/_images/math/235e42ec68d2d773899efcf0a4a9d35a7afedb64.pngk [http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/object_detection.html#matchtemplateOpenCV Sphinx doc5Compares a template against overlapped image regions.MThe function slides through image, compares the overlapped patches of size  ]http://docs.opencv.org/3.0-last-rst/_images/math/d47153257f0243694e5632bb23b85009eb9e5599.png w times h against templ using the specified method and stores the comparison results in result . 
Here are the formulae for the available comparison methods ( ]http://docs.opencv.org/3.0-last-rst/_images/math/06f9f0fcaa8d96a6a23b0f7d1566fe5efaa789ad.pngI denotes image,  ]http://docs.opencv.org/3.0-last-rst/_images/math/87804527283a4539e1e17c5861df8cb92a97fd6d.pngT template,  ]http://docs.opencv.org/3.0-last-rst/_images/math/8fa391da5431a5d6eaba1325c3e7cb3da22812b5.pngRH result). The summation is done over template and/or the image patch: ]http://docs.opencv.org/3.0-last-rst/_images/math/ff90cafd4a71d85875237787b54815ee8ac77bff.pngx' = 0...w-1, y' = 0...h-1 defghij kMImage where the search is running. It must be 8-bit or 32-bit floating-point.\Searched template. It must be not greater than the source image and have the same data type.+Parameter specifying the comparison method. NormaliseZMap of comparison results. It must be single-channel 32-bit floating-point. If image is  ]http://docs.opencv.org/3.0-last-rst/_images/math/e4926c3d97c3f7434c6317ba24b8b9294a0aba64.png and templ is  ]http://docs.opencv.org/3.0-last-rst/_images/math/d47153257f0243694e5632bb23b85009eb9e5599.png , then result is  ]http://docs.opencv.org/3.0-last-rst/_images/math/e318d7237b57e08135e689fd9136b9ac8e4a4102.png.defghijkghijdefk defghij kNone+,9:;DLQRST[ tKStores absolutely all the contour points. That is, any 2 subsequent points (x1,y1) and (x2,y2)T of the contour will be either horizontal, vertical or diagonal neighbors, that is, max(abs(x1-x2),abs(y2-y1)) == 1.uCompresses horizontal, vertical, and diagonal segments and leaves only their end points. For example, an up-right rectangular contour is encoded with 4 points.y*Retrieves only the extreme outer contours.zRRetrieves all of the contours without establishing any hierarchical relationships.{-Retrieves all of the contours and organizes them into a two-level hierarchy. At the top level, there are external boundaries of the components. At the second level, there are boundaries of the holes. 
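The sliding comparison that matchTemplate performs can be shown in miniature for the squared-difference case: at every placement (x, y), sum the squared differences between the template and the image patch under it. Lists of rows stand in for Mats here, and all names are illustrative, not the library's API:

```haskell
-- Sum of squared differences between the template and the image patch
-- placed with its top-left corner at (x, y).
sqDiffAt :: [[Double]] -> [[Double]] -> (Int, Int) -> Double
sqDiffAt img tmpl (x, y) =
    sum [ (pix img (x + x') (y + y') - pix tmpl x' y') ^ (2 :: Int)
        | y' <- [0 .. th - 1], x' <- [0 .. tw - 1] ]
  where
    th = length tmpl
    tw = length (head tmpl)
    pix m px py = (m !! py) !! px

-- Slide the template over every valid placement; the result has size
-- (W - w + 1) x (H - h + 1), as in the matchTemplate result description.
matchSqDiff :: [[Double]] -> [[Double]] -> [[Double]]
matchSqDiff img tmpl =
    [ [ sqDiffAt img tmpl (x, y) | x <- [0 .. iw - tw] ]
    | y <- [0 .. ih - th] ]
  where
    ih = length img
    iw = length (head img)
    th = length tmpl
    tw = length (head tmpl)

main :: IO ()
main =
  -- A perfect match yields 0 at the bottom-right placement.
  print (matchSqDiff [[1,2,3],[4,5,6],[7,8,9]] [[5,6],[8,9]])
```

With a squared-difference method the best match is the minimum of the result map; with the correlation methods it is the maximum.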
If there is another contour inside a hole of a connected component, it is still put at the top level. Retrieves all of the contours and reconstructs a full hierarchy of nested contours. Oriented area flag. Return a signed area value, depending on the contour orientation (clockwise or counter-clockwise). Using this feature you can determine the orientation of a contour by taking the sign of the area. Return the area as an absolute value. Calculates a contour area. The function computes a contour area. Similarly to moments, the area is computed using the https://en.wikipedia.org/wiki/Green%27s_theorem Green formula. Thus, the returned area and the number of non-zero pixels, if you draw the contour using drawContours or fillPoly, can be different. Also, the function will most certainly give wrong results for contours with self-intersections. http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=contourarea#cv2.contourArea OpenCV Sphinx doc Performs a point-in-contour test. The function determines whether the point is inside a contour, outside, or lies on an edge (or coincides with a vertex). It returns a positive (inside), negative (outside), or zero (on an edge) value, correspondingly. When measureDist=false, the return value is +1, -1, and 0, respectively. Otherwise, the return value is a signed distance between the point and the nearest contour edge. http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html#pointpolygontest OpenCV Sphinx doc Approximates a polygonal curve(s) with the specified precision. The function approxPolyDP approximates a curve or a polygon with another curve/polygon with fewer vertices so that the distance between them is less than or equal to the specified precision.
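Two of the operations above can be sketched in miniature: the oriented contour area via the shoelace form of Green's formula, and the curve approximation via a small Ramer-Douglas-Peucker pass. Both are hedged, illustrative sketches over plain 2-D points, not the library's contourArea or approxPolyDP:

```haskell
type Pt = (Double, Double)

-- Oriented (signed) polygon area via the shoelace form of Green's formula;
-- the sign encodes clockwise vs counter-clockwise orientation.
orientedArea :: [Pt] -> Double
orientedArea pts =
    0.5 * sum [ x1 * y2 - x2 * y1
              | ((x1, y1), (x2, y2)) <- zip pts (tail pts ++ [head pts]) ]

-- Perpendicular distance from point p to the line through a and b.
perpDist :: Pt -> Pt -> Pt -> Double
perpDist (ax, ay) (bx, by) (px, py)
  | len == 0  = sqrt ((px - ax) ^ (2 :: Int) + (py - ay) ^ (2 :: Int))
  | otherwise = abs ((bx - ax) * (ay - py) - (ax - px) * (by - ay)) / len
  where len = sqrt ((bx - ax) ^ (2 :: Int) + (by - ay) ^ (2 :: Int))

-- Minimal Ramer-Douglas-Peucker simplification (open, non-closed curve);
-- 'eps' plays the role of approxPolyDP's epsilon parameter.
rdp :: Double -> [Pt] -> [Pt]
rdp eps pts
  | length pts < 3 = pts
  | dmax <= eps    = [head pts, last pts]
  | otherwise      =
      init (rdp eps (take (imax + 1) pts)) ++ rdp eps (drop imax pts)
  where
    ds = [ perpDist (head pts) (last pts) p | p <- pts ]
    (dmax, imax) = maximum (zip ds [0 :: Int ..])

main :: IO ()
main = do
  print (orientedArea [(0,0), (4,0), (4,3), (0,3)])          -- 12.0
  print (rdp 0.5 [(0,0), (1,0.1), (2,0), (3,-0.1), (4,0)])   -- endpoints only
```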
It uses the <http://en.wikipedia.org/wiki/Ramer-Douglas-Peucker_algorithmDouglas-Peucker algorithm http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=contourarea#approxpolydp) opqrstuvwxyz{|}~-Input vector of 2D points (contour vertices).Signed or unsigned areaContour.!Point tested against the contour.If true, the function estimates the signed distance from the point to the nearest contour edge. Otherwise, the function only checks if the point is inside a contour or not. epsilon is closed is closedopqrstuvwxyz{|}~opqr}~xyz{|stuvw opqrstuvwxyz{|}~ None+,9:;DLQRST[  1D example: iiiiii|abcdefgh|iiiiiii with some specified i 1D example: aaaaaa|abcdefgh|hhhhhhh 1D example: fedcba|abcdefgh|hgfedcb 1D example: cdefgh|abcdefgh|abcdefg 1D example: gfedcb|abcdefgh|gfedcba 1D example: uvwxyz|absdefgh|ijklmnodo not look outside of ROINearest neighbor interpolation.Bilinear interpolation.Bicubic interpolation.Resampling using pixel area relation. It may be a preferred method for image decimation, as it gives moire'-free results. 
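Among the interpolation modes listed above, the bilinear rule is easy to state: blend the four surrounding pixels, weighted by the fractional part of the sample position. A minimal sketch over a list of rows, with clamped edge coordinates (illustrative only, not the library's resize machinery):

```haskell
-- Bilinear sampling at a fractional position: the rule behind
-- InterLinear-style resizing. The "image" is a list of rows; out-of-range
-- coordinates are clamped to the border.
bilinear :: [[Double]] -> Double -> Double -> Double
bilinear grid x y =
      (1 - fy) * ((1 - fx) * p x0 y0 + fx * p x1 y0)
    + fy       * ((1 - fx) * p x0 y1 + fx * p x1 y1)
  where
    h  = length grid
    w  = length (head grid)
    x0 = min (w - 1) (max 0 (floor x))
    y0 = min (h - 1) (max 0 (floor y))
    x1 = min (w - 1) (x0 + 1)
    y1 = min (h - 1) (y0 + 1)
    fx = x - fromIntegral x0
    fy = y - fromIntegral y0
    p px py = (grid !! py) !! px

main :: IO ()
main =
  -- The centre of a 2x2 patch is the average of its four corners.
  print (bilinear [[0, 10], [20, 30]] 0.5 0.5)  -- 15.0
```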
But when the image is zoomed, it is similar to the  method.+Lanczos interpolation over 8x8 neighborhoodENone+,9:;DLQRST[     None+,9:;DLQRST[FNone+,9:;DLQRST[   None+,9:;DLQRST[ Resize to an absolute size.?Resize with relative factors for both the width and the height.Resizes an image5To shrink an image, it will generally look best with N interpolation, whereas to enlarge an image, it will generally look best with  (slow) or  (faster but still looks OK).Example: resizeInterAreaImg :: Mat ('S ['D, 'D]) ('S 3) ('S Word8) resizeInterAreaImg = exceptError $ withMatM (h ::: w + (w `div` 2) ::: Z) (Proxy :: Proxy 3) (Proxy :: Proxy Word8) transparent $ \imgM -> do birds_resized <- pureExcept $ resize (ResizeRel $ pure 0.5) InterArea birds_768x512 matCopyToM imgM (V2 0 0) birds_768x512 Nothing matCopyToM imgM (V2 w 0) birds_resized Nothing lift $ arrowedLine imgM (V2 startX y) (V2 pointX y) red 4 LineType_8 0 0.15 where [h, w] = miShape $ matInfo birds_768x512 startX = round $ fromIntegral w * (0.95 :: Double) pointX = round $ fromIntegral w * (1.05 :: Double) y = h `div` 4  -doc/generated/examples/resizeInterAreaImg.pngresizeInterAreaImg ]http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/geometric_transformations.html#resizeOpenCV Sphinx doc,Applies an affine transformation to an imageExample: rotateBirds :: Mat (ShapeT [2, 3]) ('S 1) ('S Double) rotateBirds = getRotationMatrix2D (V2 256 170 :: V2 CFloat) 45 0.75 warpAffineImg :: Birds_512x341 warpAffineImg = exceptError $ warpAffine birds_512x341 rotateBirds InterArea False False (BorderConstant black) warpAffineInvImg :: Birds_512x341 warpAffineInvImg = exceptError $ warpAffine warpAffineImg rotateBirds InterCubic True False (BorderConstant black)  doc/generated/birds_512x341.pngoriginal  (doc/generated/examples/warpAffineImg.png warpAffineImg +doc/generated/examples/warpAffineInvImg.pngwarpAffineInvImg ahttp://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/geometric_transformations.html#warpaffineOpenCV Sphinx 
doc0Applies a perspective transformation to an image fhttp://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/geometric_transformations.html#warpperspectiveOpenCV Sphinx doc Inverts an affine transformation lhttp://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/geometric_transformations.html#invertaffinetransformOpenCV Sphinx docKCalculates a perspective transformation matrix for 2D perspective transform nhttp://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/geometric_transformations.html#getperspectivetransformOpenCV Sphinx doc*Calculates an affine matrix of 2D rotation jhttp://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/geometric_transformations.html#getrotationmatrix2dOpenCV Sphinx doc9Applies a generic geometrical transformation to an image.GThe function remap transforms the source image using the specified map: dst(x,y) = src(map(x,y))Example: remapImg :: forall (width :: Nat) (height :: Nat) (channels :: Nat) (depth :: * ) . (Mat ('S ['S height, 'S width]) ('S channels) ('S depth) ~ Birds_512x341) => Mat ('S ['S height, 'S width]) ('S channels) ('S depth) remapImg = exceptError $ remap birds_512x341 transform InterLinear (BorderConstant black) where transform = exceptError $ matFromFunc (Proxy :: Proxy [height, width]) (Proxy :: Proxy 2) (Proxy :: Proxy Float) exampleFunc exampleFunc [_y, x] 0 = wobble x w exampleFunc [ y, _x] 1 = wobble y h exampleFunc _pos _channel = error "impossible" wobble :: Int -> Float -> Float wobble v s = let v' = fromIntegral v n = v' / s in v' + (s * 0.05 * sin (n * 2 * pi * 5)) w = fromInteger $ natVal (Proxy :: Proxy width) h = fromInteger $ natVal (Proxy :: Proxy height)  doc/generated/birds_512x341.pngoriginal #doc/generated/examples/remapImg.pngremapImg \http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/geometric_transformations.html#remapOpenCV documentationUThe function transforms an image to compensate radial and tangential lens distortion.Those pixels in the destination image, for which there is no 
correspondent pixels in the source image, are filled with zeros (black color).HThe camera matrix and the distortion parameters can be determined using calibrateCamera . If the resolution of images is different from the resolution used at the calibration stage, f_x, f_y, c_x and c_y need to be scaled accordingly, while the distortion coefficients remain the same.Example: undistortImg :: forall (width :: Nat) (height :: Nat) (channels :: Nat) (depth :: * ) . (Mat ('S ['S height, 'S width]) ('S channels) ('S depth) ~ Birds_512x341) => Mat ('S ['S height, 'S width]) ('S channels) ('S depth) undistortImg = undistort birds_512x341 intrinsics coefficients where intrinsics :: M33 Float intrinsics = V3 (V3 15840.8 0 2049) (V3 0 15830.3 1097) (V3 0 0 1) coefficients :: Matx51d coefficients = unsafePerformIO $ newMatx51d (-2.239145913492247) 13.674526561736648 3.650187848850095e-2 (-2.0042015752853796e-2) (-0.44790921357620456)  doc/generated/birds_512x341.pngoriginal 'doc/generated/examples/undistortImg.png undistortImg    Source image.Affine transformation matrix.#Perform the inverse transformation.Fill outliers.Pixel extrapolation method.Transformed source image. Source image."Perspective transformation matrix.#Perform the inverse transformation.Fill outliers.Pixel extrapolation method.Transformed source image.HArray of 4 floating-point Points representing 4 vertices in source imageMArray of 4 floating-point Points representing 4 vertices in destination imageAThe output perspective transformation, 3x3 floating-point-matrix.+Center of the rotation in the source image.Rotation angle in degrees. Positive values mean counter-clockwise rotation (the coordinate origin is assumed to be the top-left corner).Isotropic scale factor.<The output affine transformation, 2x3 floating-point matrix. Source image. A map of (x, y) points.'Interpolation method to use. 
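The defining equation of remap, dst(x,y) = src(map(x,y)), can be sketched with nearest-neighbour lookup and a constant border value for unmapped pixels. This is an illustrative toy over lists of rows, not the library's remap; all names are invented:

```haskell
-- dst(x, y) = src(map(x, y)): remap in miniature, with nearest-neighbour
-- lookup and a constant border value for out-of-range source coordinates
-- (the BorderConstant idea).
remapNN :: a -> [[a]] -> ((Int, Int) -> (Int, Int)) -> [[a]]
remapNN border src mapF =
    [ [ sample (mapF (x, y)) | x <- [0 .. w - 1] ] | y <- [0 .. h - 1] ]
  where
    h = length src
    w = length (head src)
    sample (sx, sy)
      | sx >= 0 && sx < w && sy >= 0 && sy < h = (src !! sy) !! sx
      | otherwise = border

main :: IO ()
main =
  -- Mirroring a 2x2 "image" around its vertical axis.
  print (remapNN (0 :: Int) [[1, 2], [3, 4]] (\(x, y) -> (1 - x, y)))
```

The wobble example above is the same idea with a smooth, fractional map and proper interpolation in place of the nearest-neighbour lookup.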
Note that $ is not supported by this function.The source image to undistort.'The 3x3 matrix of intrinsic parameters.qThe distortion coefficients (k1,k2,p1,p2[,k3[,k4,k5,k6[,s1,s2,s3,s4[,x,y]]]]) of 4, 5, 8, 12 or 14 elements.     None+,9:;DLQRST[$An opening operation: dilate . erode#A closing operation: erode . dilate(A morphological gradient: dilate - erode"top hat": src - open"black hat": close - src"A rectangular structuring element.An elliptic structuring element, that is, a filled ellipse inscribed into the rectangle Rect(0, 0, esize.width, 0.esize.height).#A cross-shaped structuring element.$Calculates the Laplacian of an imageThe function calculates the Laplacian of the source image by adding up the second x and y derivatives calculated using the Sobel operator.Example: laplacianImg :: forall shape channels depth . (Mat shape channels depth ~ Birds_512x341) => Mat shape ('S 1) ('S Double) laplacianImg = exceptError $ do imgG <- cvtColor bgr gray birds_512x341 laplacian Nothing Nothing Nothing Nothing imgG  'doc/generated/examples/laplacianImg.png laplacianImg Phttp://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/filtering.html#laplacianOpenCV Sphinx doc&Blurs an image using the median filterExample: 8medianBlurImg :: forall (width :: Nat) (width2 :: Nat) (height :: Nat) (channels :: Nat) (depth :: *) . 
( Mat (ShapeT [height, width]) ('S channels) ('S depth) ~ Birds_512x341 , width2 ~ ((*) width 2) -- TODO (RvD): HSE parse error with infix type operator ) => Mat (ShapeT [height, width2]) ('S channels) ('S depth) medianBlurImg = exceptError $ withMatM (Proxy :: Proxy [height, width2]) (Proxy :: Proxy channels) (Proxy :: Proxy depth) white $ \imgM -> do birdsBlurred <- pureExcept $ medianBlur birds_512x341 13 matCopyToM imgM (V2 0 0) birds_512x341 Nothing matCopyToM imgM (V2 w 0) birdsBlurred Nothing where w = fromInteger $ natVal (Proxy :: Proxy width)  (doc/generated/examples/medianBlurImg.png medianBlurImg Qhttp://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/filtering.html#medianblurOpenCV Sphinx doc"Blurs an image using a box filter.Example: @boxBlurImg :: forall (width :: Nat) (width2 :: Nat) (height :: Nat) (channels :: Nat) (depth :: *) . ( Mat (ShapeT [height, width]) ('S channels) ('S depth) ~ Birds_512x341 , width2 ~ ((*) width 2) -- TODO (RvD): HSE parse error with infix type operator ) => Mat (ShapeT [height, width2]) ('S channels) ('S depth) boxBlurImg = exceptError $ withMatM (Proxy :: Proxy [height, width2]) (Proxy :: Proxy channels) (Proxy :: Proxy depth) white $ \imgM -> do birdsBlurred <- pureExcept $ blur (V2 13 13 :: V2 Int32) birds_512x341 matCopyToM imgM (V2 0 0) birds_512x341 Nothing matCopyToM imgM (V2 w 0) birdsBlurred Nothing where w = fromInteger $ natVal (Proxy :: Proxy width)  %doc/generated/examples/boxBlurImg.png boxBlurImg Khttp://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/filtering.html#blurOpenCV Sphinx doc7Erodes an image by using a specific structuring elementExample: MerodeImg :: forall (width :: Nat) (width2 :: Nat) (height :: Nat) (channels :: Nat) (depth :: *) . 
( Mat (ShapeT [height, width]) ('S channels) ('S depth) ~ Lambda , width2 ~ ((*) width 2) -- TODO (RvD): HSE parse error with infix type operator ) => Mat (ShapeT [height, width2]) ('S channels) ('S depth) erodeImg = exceptError $ withMatM (Proxy :: Proxy [height, width2]) (Proxy :: Proxy channels) (Proxy :: Proxy depth) white $ \imgM -> do erodedLambda <- pureExcept $ erode lambda Nothing (Nothing :: Maybe Point2i) 5 BorderReplicate matCopyToM imgM (V2 0 0) lambda Nothing matCopyToM imgM (V2 w 0) erodedLambda Nothing where w = fromInteger $ natVal (Proxy :: Proxy width)  #doc/generated/examples/erodeImg.pngerodeImg Lhttp://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/filtering.html#erodeOpenCV Sphinx doc#Convolves an image with the kernel.Example: gfilter2DImg :: forall (width :: Nat) (width2 :: Nat) (height :: Nat) (channels :: Nat) (depth :: *) . ( Mat (ShapeT [height, width]) ('S channels) ('S depth) ~ Birds_512x341 , width2 ~ ((*) width 2) -- TODO (RvD): HSE parse error with infix type operator ) => Mat (ShapeT [height, width2]) ('S channels) ('S depth) filter2DImg = exceptError $ withMatM (Proxy :: Proxy [height, width2]) (Proxy :: Proxy channels) (Proxy :: Proxy depth) white $ \imgM -> do filteredBird <- pureExcept $ filter2D birds_512x341 kernel (Nothing :: Maybe Point2i) 0 BorderReplicate matCopyToM imgM (V2 0 0) birds_512x341 Nothing matCopyToM imgM (V2 w 0) filteredBird Nothing where w = fromInteger $ natVal (Proxy :: Proxy width) kernel = exceptError $ withMatM (Proxy :: Proxy [3, 3]) (Proxy :: Proxy 1) (Proxy :: Proxy Double) black $ \imgM -> do lift $ line imgM (V2 0 0 :: V2 Int32) (V2 0 0 :: V2 Int32) (V4 (-2) (-2) (-2) 1 :: V4 Double) 0 LineType_8 0 lift $ line imgM (V2 1 0 :: V2 Int32) (V2 0 1 :: V2 Int32) (V4 (-1) (-1) (-1) 1 :: V4 Double) 0 LineType_8 0 lift $ line imgM (V2 1 1 :: V2 Int32) (V2 1 1 :: V2 Int32) (V4 1 1 1 1 :: V4 Double) 0 LineType_8 0 lift $ line imgM (V2 1 2 :: V2 Int32) (V2 2 1 :: V2 Int32) (V4 1 1 1 1 :: V4 Double) 0 
LineType_8 0 lift $ line imgM (V2 2 2 :: V2 Int32) (V2 2 2 :: V2 Int32) (V4 2 2 2 1 :: V4 Double) 0 LineType_8 0  &doc/generated/examples/filter2DImg.png filter2DImg Ohttp://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/filtering.html#filter2dOpenCV Sphinx doc8Dilates an image by using a specific structuring elementExample: RdilateImg :: forall (width :: Nat) (width2 :: Nat) (height :: Nat) (channels :: Nat) (depth :: *) . ( Mat (ShapeT [height, width]) ('S channels) ('S depth) ~ Lambda , width2 ~ ((*) width 2) -- TODO (RvD): HSE parse error with infix type operator ) => Mat (ShapeT [height, width2]) ('S channels) ('S depth) dilateImg = exceptError $ withMatM (Proxy :: Proxy [height, width2]) (Proxy :: Proxy channels) (Proxy :: Proxy depth) white $ \imgM -> do dilatedLambda <- pureExcept $ dilate lambda Nothing (Nothing :: Maybe Point2i) 3 BorderReplicate matCopyToM imgM (V2 0 0) lambda Nothing matCopyToM imgM (V2 w 0) dilatedLambda Nothing where w = fromInteger $ natVal (Proxy :: Proxy width)  $doc/generated/examples/dilateImg.png dilateImg Mhttp://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/filtering.html#dilateOpenCV Sphinx doc/Performs advanced morphological transformations Shttp://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/filtering.html#morphologyexOpenCV Sphinx docZReturns a structuring element of the specified size and shape for morphological operationsExample: type StructureImg = Mat (ShapeT [128, 128]) ('S 1) ('S Word8) structureImg :: MorphShape -> StructureImg structureImg shape = exceptError $ do mat <- getStructuringElement shape (Proxy :: Proxy 128) (Proxy :: Proxy 128) img <- matConvertTo (Just 255) Nothing mat bitwiseNot img morphRectImg :: StructureImg morphRectImg = structureImg MorphRect morphEllipseImg :: StructureImg morphEllipseImg = structureImg MorphEllipse morphCrossImg :: StructureImg morphCrossImg = structureImg $ MorphCross $ toPoint (pure (-1) :: V2 Int32)  'doc/generated/examples/morphRectImg.png morphRectImg  
*doc/generated/examples/morphEllipseImg.pngmorphEllipseImg (doc/generated/examples/morphCrossImg.png morphCrossImg \http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/filtering.html#getstructuringelementOpenCV Sphinx doc'  tAperture size used to compute the second-derivative filters. The size must be positive and odd. Default value is 1.MOptional scale factor for the computed Laplacian values. Default value is 1.GOptional delta value that is added to the results. Default value is 0.Pixel extrapolation method.SInput 1-, 3-, or 4-channel image; when ksize is 3 or 5, the image depth should be  ,  , or  -, for larger aperture sizes, it can only be  .QAperture linear size; it must be odd and greater than 1, for example: 3, 5, 7...Blurring kernel size.Blurring kernel size.sigmaXsigmaY Input image.)Structuring element used for erosion. If  is used a 3x3G rectangular structuring element is used. Kernel can be created using .anchor iterations Input image.convolution kernel (or rather a correlation kernel), a single-channel floating point matrix; if you want to apply different kernels to different channels, split the image into separate color planes using split and process them individually.anchordelta Input image.*Structuring element used for dilation. If  is used a 3x3G rectangular structuring element is used. Kernel can be created using .anchor iterations Source image."Type of a morphological operation.Structuring element. Anchor position with the kernel.1Number of times erosion and dilation are applied.   
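The morphological operations in this section reduce to two primitives over binary images. A hedged sketch with a 3x3 rectangular structuring element over a set of "on" pixel coordinates (illustrative only, not the library's erode/dilate), using the compositions stated earlier: opening = dilate . erode, closing = erode . dilate:

```haskell
import qualified Data.Set as Set

-- Binary erode/dilate with a 3x3 rectangular structuring element over a
-- set of "on" pixel coordinates.
type P = (Int, Int)

se3x3 :: [P]
se3x3 = [ (dx, dy) | dx <- [-1, 0, 1], dy <- [-1, 0, 1] ]

dilateB, erodeB :: Set.Set P -> Set.Set P
-- A pixel is on after dilation if the element, centred anywhere on an
-- on-pixel, covers it.
dilateB s = Set.fromList
  [ (x + dx, y + dy) | (x, y) <- Set.toList s, (dx, dy) <- se3x3 ]
-- A pixel survives erosion only if the whole element fits inside the shape.
erodeB s = Set.filter
  (\(x, y) -> all (\(dx, dy) -> (x + dx, y + dy) `Set.member` s) se3x3) s

opening, closing :: Set.Set P -> Set.Set P
opening = dilateB . erodeB   -- MorphOpen:  dilate . erode
closing = erodeB . dilateB   -- MorphClose: erode . dilate

block3x3 :: Set.Set P
block3x3 = Set.fromList [ (x, y) | x <- [0 .. 2], y <- [0 .. 2] ]

main :: IO ()
main = do
  print (Set.toList (erodeB block3x3))   -- only the centre survives
  print (opening block3x3 == block3x3)   -- a 3x3 block is stable: True
```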
Example: carAnim :: Animation (ShapeT [240, 320]) ('S 3) ('S Word8) carAnim = carOverhead mog2Anim :: IO (Animation (ShapeT [240, 320]) ('S 3) ('S Word8)) mog2Anim = do mog2 <- newBackgroundSubtractorMOG2 Nothing Nothing Nothing forM carOverhead $ \(delay, img) -> do fg <- bgSubApply mog2 0.1 img fgBgr <- exceptErrorIO $ pureExcept $ cvtColor gray bgr fg pure (delay, fgBgr) Original: doc/generated/examples/car.gif carAnim Foreground: doc/generated/examples/mog2.gif mog2Anim Length of the history. Threshold on the squared distance between the pixel and the sample to decide whether a pixel is close to that sample. This parameter does not affect the background update. If True, the algorithm will detect shadows and mark them. It decreases the speed a bit, so if you do not need this feature, set the parameter to False. Length of the history. Threshold on the squared Mahalanobis distance between the pixel and the model to decide whether a pixel is well described by the background model. This parameter does not affect the background update. If True, the algorithm will detect shadows and mark them. It decreases the speed a bit, so if you do not need this feature, set the parameter to False. The API might change in the future, but currently we can: Open/create a new file:  wr <-  $  ("tst.MOV" "avc1" 30 (3840, 2160) ) Now, we can write some frames, but they need to have exactly the same size as the one we have opened with:   $  wr img We need to close at the end or it will not finalize the file:   $  wr Flip around the x-axis. Flip around the y-axis. Flip around both the x- and y-axes. Calculates an absolute value of each matrix element.
Rhttp://docs.opencv.org/3.0-last-rst/modules/core/doc/operations_on_arrays.html#absOpenCV Sphinx docBCalculates the per-element absolute difference between two arrays.Example: vmatAbsDiffImg :: Mat (ShapeT [341, 512]) ('S 3) ('S Word8) matAbsDiffImg = matAbsDiff flower_512x341 sailboat_512x341  (doc/generated/examples/matAbsDiffImg.png matAbsDiffImg Vhttp://docs.opencv.org/3.0-last-rst/modules/core/doc/operations_on_arrays.html#absdiffOpenCV Sphinx doc-Calculates the per-element sum of two arrays.Example: jmatAddImg :: Mat (ShapeT [341, 512]) ('S 3) ('S Word8) matAddImg = matAdd flower_512x341 sailboat_512x341  $doc/generated/examples/matAddImg.png matAddImg Rhttp://docs.opencv.org/3.0-last-rst/modules/core/doc/operations_on_arrays.html#addOpenCV Sphinx doc 8Calculates the per-element difference between two arraysExample: ymatSubtractImg :: Mat (ShapeT [341, 512]) ('S 3) ('S Word8) matSubtractImg = matSubtract flower_512x341 sailboat_512x341  )doc/generated/examples/matSubtractImg.pngmatSubtractImg Whttp://docs.opencv.org/3.0-last-rst/modules/core/doc/operations_on_arrays.html#subtractOpenCV Sphinx doc )Calculates the weighted sum of two arraysExample: matAddWeightedImg :: Mat (ShapeT [341, 512]) ('S 3) ('S Word8) matAddWeightedImg = exceptError $ matAddWeighted flower_512x341 0.5 sailboat_512x341 0.5 0.0  ,doc/generated/examples/matAddWeightedImg.pngmatAddWeightedImg Zhttp://docs.opencv.org/3.0-last-rst/modules/core/doc/operations_on_arrays.html#addweightedOpenCV Sphinx doc 7Calculates the sum of a scaled array and another array.The function scaleAdd is one of the classical primitive linear algebra operations, known as DAXPY or SAXPY in BLAS. It calculates the sum of a scaled array and another array. 
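The element-wise arithmetic above saturates rather than wraps: a sum beyond 255 is clipped for Word8 matrices. A hedged sketch of that per-element behaviour on single values (illustrative helpers, not the library's matAdd/matSubtract/matAbsDiff):

```haskell
import Data.Word (Word8)

-- Per-element saturating add/subtract on Word8, sketching the
-- saturate_cast behaviour behind matAdd/matSubtract; plain (+) on Word8
-- would wrap around instead of clipping at 255.
addSat, subSat :: Word8 -> Word8 -> Word8
addSat a b = fromIntegral (min 255 (fromIntegral a + fromIntegral b :: Int))
subSat a b = fromIntegral (max 0   (fromIntegral a - fromIntegral b :: Int))

-- The matAbsDiff idea per element: |a - b| without leaving the Word8 range.
absDiff8 :: Word8 -> Word8 -> Word8
absDiff8 a b = max a b - min a b

main :: IO ()
main = do
  print (addSat 200 100)   -- 255, not the wrapped-around 44
  print (subSat 100 200)   -- 0
  print (absDiff8 30 200)  -- 170
```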
Whttp://docs.opencv.org/3.0-last-rst/modules/core/doc/operations_on_arrays.html#scaleaddOpenCV Sphinx docExample:  bitwiseNotImg :: Mat (ShapeT VennShape) ('S 3) ('S Word8) bitwiseNotImg = exceptError $ do img <- bitwiseNot vennCircleAImg imgBgr <- cvtColor gray bgr img createMat $ do imgM <- lift $ thaw imgBgr lift $ vennCircleA imgM blue 2 pure imgM  (doc/generated/examples/bitwiseNotImg.png bitwiseNotImg Zhttp://docs.opencv.org/3.0-last-rst/modules/core/doc/operations_on_arrays.html#bitwise-notOpenCV Sphinx docExample: AbitwiseAndImg :: Mat (ShapeT VennShape) ('S 3) ('S Word8) bitwiseAndImg = exceptError $ do img <- bitwiseAnd vennCircleAImg vennCircleBImg imgBgr <- cvtColor gray bgr img createMat $ do imgM <- lift $ thaw imgBgr lift $ vennCircleA imgM blue 2 lift $ vennCircleB imgM red 2 pure imgM  (doc/generated/examples/bitwiseAndImg.png bitwiseAndImg Zhttp://docs.opencv.org/3.0-last-rst/modules/core/doc/operations_on_arrays.html#bitwise-andOpenCV Sphinx docExample: >bitwiseOrImg :: Mat (ShapeT VennShape) ('S 3) ('S Word8) bitwiseOrImg = exceptError $ do img <- bitwiseOr vennCircleAImg vennCircleBImg imgBgr <- cvtColor gray bgr img createMat $ do imgM <- lift $ thaw imgBgr lift $ vennCircleA imgM blue 2 lift $ vennCircleB imgM red 2 pure imgM  'doc/generated/examples/bitwiseOrImg.png bitwiseOrImg Yhttp://docs.opencv.org/3.0-last-rst/modules/core/doc/operations_on_arrays.html#bitwise-orOpenCV Sphinx docExample: AbitwiseXorImg :: Mat (ShapeT VennShape) ('S 3) ('S Word8) bitwiseXorImg = exceptError $ do img <- bitwiseXor vennCircleAImg vennCircleBImg imgBgr <- cvtColor gray bgr img createMat $ do imgM <- lift $ thaw imgBgr lift $ vennCircleA imgM blue 2 lift $ vennCircleB imgM red 2 pure imgM  (doc/generated/examples/bitwiseXorImg.png bitwiseXorImg Zhttp://docs.opencv.org/3.0-last-rst/modules/core/doc/operations_on_arrays.html#bitwise-xorOpenCV Sphinx docBCreates one multichannel array out of several single-channel ones. 
Thttp://docs.opencv.org/3.0-last-rst/modules/core/doc/operations_on_arrays.html#mergeOpenCV Sphinx docADivides a multi-channel array into several single-channel arrays.Example: matSplitImg :: forall (width :: Nat) (width3 :: Nat) (height :: Nat) (channels :: Nat) (depth :: *) . ( Mat (ShapeT [height, width]) ('S channels) ('S depth) ~ Birds_512x341 , width3 ~ ((*) width 3) ) => Mat (ShapeT [height, width3]) ('S channels) ('S depth) matSplitImg = exceptError $ do zeroImg <- mkMat (Proxy :: Proxy [height, width]) (Proxy :: Proxy 1) (Proxy :: Proxy depth) black let blueImg = matMerge $ V.fromList [channelImgs V.! 0, zeroImg, zeroImg] greenImg = matMerge $ V.fromList [zeroImg, channelImgs V.! 1, zeroImg] redImg = matMerge $ V.fromList [zeroImg, zeroImg, channelImgs V.! 2] withMatM (Proxy :: Proxy [height, width3]) (Proxy :: Proxy channels) (Proxy :: Proxy depth) white $ \imgM -> do matCopyToM imgM (V2 (w*0) 0) (unsafeCoerceMat blueImg) Nothing matCopyToM imgM (V2 (w*1) 0) (unsafeCoerceMat greenImg) Nothing matCopyToM imgM (V2 (w*2) 0) (unsafeCoerceMat redImg) Nothing where channelImgs = matSplit birds_512x341 w :: Int32 w = fromInteger $ natVal (Proxy :: Proxy width)  &doc/generated/examples/matSplitImg.png matSplitImg Thttp://docs.opencv.org/3.0-last-rst/modules/core/doc/operations_on_arrays.html#splitOpenCV Sphinx doc4Apply the same 1 dimensional action to every channel0Finds the global minimum and maximum in an array Xhttp://docs.opencv.org/3.0-last-rst/modules/core/doc/operations_on_arrays.html#minmaxlocOpenCV Sphinx doc!Calculates an absolute array norm Shttp://docs.opencv.org/3.0-last-rst/modules/core/doc/operations_on_arrays.html#normOpenCV Sphinx docECalculates an absolute difference norm, or a relative difference norm Shttp://docs.opencv.org/3.0-last-rst/modules/core/doc/operations_on_arrays.html#normOpenCV Sphinx doc.Normalizes the norm or value range of an array Xhttp://docs.opencv.org/3.0-last-rst/modules/core/doc/operations_on_arrays.html#normalizeOpenCV 
matSum

Calculates the sum of array elements.

Example:

@
matSumImg :: Mat (ShapeT [201, 201]) ('S 3) ('S Word8)
matSumImg = exceptError $
    withMatM (Proxy :: Proxy [201, 201])
             (Proxy :: Proxy 3)
             (Proxy :: Proxy Word8)
             black $ \imgM -> do
      -- Draw a filled circle. Each pixel has a value of (255,255,255).
      lift $ circle imgM (pure radius :: V2 Int32) radius white (-1) LineType_8 0
      -- Calculate the sum of all pixels.
      scalar <- matSumM imgM
      let V4 area _y _z _w = fromScalar scalar :: V4 Double
      -- Circle area = pi * radius * radius
      let approxPi = area / 255 / (radius * radius)
      lift $ putText imgM
                     (T.pack $ show approxPi)
                     (V2 40 110 :: V2 Int32)
                     (Font FontHersheyDuplex NotSlanted 1)
                     blue
                     1
                     LineType_AA
                     False
  where
    radius :: forall a. Num a => a
    radius = 100
@

<<doc/generated/examples/matSumImg.png matSumImg>>

<http://docs.opencv.org/3.0-last-rst/modules/core/doc/operations_on_arrays.html#sum OpenCV Sphinx doc>

meanStdDev

Calculates a mean and standard deviation of array elements.

<http://docs.opencv.org/3.0-last-rst/modules/core/doc/operations_on_arrays.html#meanstddev OpenCV Sphinx doc>

matFlip

Flips a 2D matrix around vertical, horizontal, or both axes.

Example scenarios for using the function are the following:

* Vertical flipping of the image ('FlipVertically') to switch between top-left and bottom-left image origin. This is a typical operation in video processing on Microsoft Windows.

* Horizontal flipping of the image with the subsequent horizontal shift and absolute difference calculation to check for a vertical-axis symmetry ('FlipHorizontally').

* Simultaneous horizontal and vertical flipping of the image with the subsequent shift and absolute difference calculation to check for a central symmetry ('FlipBoth').
* Reversing the order of point arrays ('FlipHorizontally' or 'FlipVertically').

Example:

@
matFlipImg :: Mat (ShapeT [341, 512]) ('S 3) ('S Word8)
matFlipImg = matFlip sailboat_512x341 FlipBoth
@

<<doc/generated/examples/matFlipImg.png matFlipImg>>

matTranspose

Transposes a matrix.

Example:

@
matTransposeImg :: Mat (ShapeT [512, 341]) ('S 3) ('S Word8)
matTransposeImg = matTranspose sailboat_512x341
@

<<doc/generated/examples/matTransposeImg.png matTransposeImg>>

Argument documentation:

* @src1@: First input array.
* @alpha@: Scale factor for the first array.
* @src2@: Second input array.
* Optional operation mask; it must have the same size as the input array, depth and 1 channel.
* Calculated norm; absolute or relative.
* Second input array of the same size and type as the first.
* Norm value to normalize to, or the lower range boundary in case of the range normalization.
* Upper range boundary in case of the range normalization; it is not used for the norm normalization.
* Input array that must have from 1 to 4 channels.
* How to flip.

findFundamentalMat

Calculates a fundamental matrix from the corresponding points in two images.

The minimum number of points required depends on the 'FundamentalMatMethod':

* 'FM_7Point': @N == 7@
* 'FM_8Point': @N >= 8@
* 'FM_Ransac': @N >= 15@
* 'FM_Lmeds':  @N >= 8@

With 7 points the 'FM_7Point' method is used, despite the given method.

With more than 7 points the 'FM_7Point' method will be replaced by the 'FM_8Point' method.

Between 7 and 15 points the 'FM_Ransac' method will be replaced by the 'FM_Lmeds' method.

With the 'FM_7Point' method and with 7 points the result can contain up to 3 matrices, resulting in either 3, 6 or 9 rows. This is why the number of resulting rows is tagged as 'D'ynamic. For all other methods the result always contains 3 rows.
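The point-count rules above can be sketched as follows. This is a hedged example assuming findFundamentalMat takes two vectors of matched points and a 'FundamentalMatMethod', and returns the fundamental matrix together with an inlier mask; the exact result type is an assumption:

@
-- Sketch under assumptions: pts1 and pts2 are matched correspondences
-- (here at least 8 of them, satisfying FM_8Point's N >= 8 requirement).
eightPointF
    :: V.Vector (V2 CDouble)  -- ^ Points from the first image.
    -> V.Vector (V2 CDouble)  -- ^ Points from the second image.
    -> Maybe ( Mat ('S '[ 'D, 'D]) ('S 1) ('S Double)  -- fundamental matrix
             , Mat ('S '[ 'D, 'D]) ('S 1) ('S Word8)   -- inlier mask
             )
eightPointF pts1 pts2 = exceptError $ findFundamentalMat pts1 pts2 FM_8Point
@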
<http://docs.opencv.org/3.0-last-rst/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#findfundamentalmat OpenCV Sphinx doc>

computeCorrespondEpilines

For points in an image of a stereo pair, computes the corresponding epilines in the other image.

<http://docs.opencv.org/3.0-last-rst/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#computecorrespondepilines OpenCV Sphinx doc>

Arguments:

* Points from the first image.
* Points from the second image.
* Image which contains the points.
* Fundamental matrix.

OpenCV.Juicy

Filter: an OpenCV 2D-filter preserving the matrix type.

Mat2D: an OpenCV bidimensional matrix.

PixelChannels: maps Pixel types to a number of channels.

PixelDepth: maps Pixel types to a depth.

fromImage

Compute an OpenCV 2D-matrix from a JuicyPixels image.

Example:

@
fromImageImg :: IO (Mat ('S '[ 'D, 'D]) ('S 3) ('S Word8))
fromImageImg = do
    r <- Codec.Picture.readImage "data/Lenna.png"
    case r of
      Left err -> error err
      Right (Codec.Picture.ImageRGB8 img) -> pure $ OpenCV.Juicy.fromImage img
      Right _ -> error "Unhandled JuicyPixels format!"
@

<<doc/generated/examples/fromImageImg.png fromImageImg>>

toImage

Compute a JuicyPixels image from an OpenCV 2D-matrix.

FIXME: There's a bug in the colour conversions in the example.

Example:

@
toImageImg :: IO (Mat ('S '[ 'D, 'D]) ('S 3) ('S Word8))
toImageImg =
    exceptError . cvtColor rgb bgr . from . to . exceptError . cvtColor bgr rgb
      \<$\> fromImageImg
  where
    to :: OpenCV.Juicy.Mat2D 'D 'D ('S 3) ('S Word8)
       -> Codec.Picture.Image Codec.Picture.PixelRGB8
    to = OpenCV.Juicy.toImage

    from :: Codec.Picture.Image Codec.Picture.PixelRGB8
         -> OpenCV.Juicy.Mat2D 'D 'D ('S 3) ('S Word8)
    from = OpenCV.Juicy.fromImage
@

<<doc/generated/examples/toImageImg.png toImageImg>>

isoJuicy

Apply an OpenCV 2D-filter to a JuicyPixels dynamic matrix, preserving the Juicy pixel encoding.

Arguments:

* JuicyPixels image.
* OpenCV 2D-matrix.
* OpenCV 2D-filter.
* JuicyPixels dynamic image.
Package: opencv-0.0.0.0

Exposed modules: OpenCV, OpenCV.Calib3d, OpenCV.Core.ArrayOps, OpenCV.Core.Types, OpenCV.Core.Types.Mat, OpenCV.Core.Types.Mat.HMat, OpenCV.Core.Types.Mat.Repa, OpenCV.Core.Types.Matx, OpenCV.Core.Types.Point, OpenCV.Core.Types.Rect, OpenCV.Core.Types.Size, OpenCV.Core.Types.Vec, OpenCV.Exception, OpenCV.Features2d, OpenCV.HighGui, OpenCV.ImgCodecs, OpenCV.ImgProc.CascadeClassifier, OpenCV.ImgProc.ColorMaps, OpenCV.ImgProc.Drawing, OpenCV.ImgProc.FeatureDetection, OpenCV.ImgProc.GeometricImgTransform, OpenCV.ImgProc.ImgFiltering, OpenCV.ImgProc.MiscImgTransform, OpenCV.ImgProc.MiscImgTransform.ColorCodes, OpenCV.ImgProc.ObjectDetection, OpenCV.ImgProc.StructuralAnalysis, OpenCV.ImgProc.Types, OpenCV.JSON, OpenCV.Juicy, OpenCV.Photo, OpenCV.TypeLevel, OpenCV.Unsafe, OpenCV.Video, OpenCV.Video.MotionAnalysis, OpenCV.VideoIO.Types, OpenCV.VideoIO.VideoCapture, OpenCV.VideoIO.VideoWriter, plus internal modules under OpenCV.Internal.*.
$fWithPtrMat$fPlacementNewC'Mat inline_c_ffi_6989586621679385866 inline_c_ffi_6989586621679385894 inline_c_ffi_6989586621679385933 inline_c_ffi_6989586621679385959 inline_c_ffi_6989586621679385977marshalInpaintingMethodunRect$fFromJSONRect $fToJSONRect$fFromJSONHRect $fToJSONHRect $fShowRect $fWithPtrRect mkRectType inline_c_ffi_6989586621679441507 inline_c_ffi_6989586621679441513 inline_c_ffi_6989586621679441519 inline_c_ffi_6989586621679441525 inline_c_ffi_6989586621679441531 inline_c_ffi_6989586621679441541 inline_c_ffi_6989586621679441557 inline_c_ffi_6989586621679441577 inline_c_ffi_6989586621679441582 inline_c_ffi_6989586621679442089 inline_c_ffi_6989586621679442095 inline_c_ffi_6989586621679442101 inline_c_ffi_6989586621679442107 inline_c_ffi_6989586621679442113 inline_c_ffi_6989586621679442123 inline_c_ffi_6989586621679442139 inline_c_ffi_6989586621679442159 inline_c_ffi_6989586621679442164 inline_c_ffi_6989586621679442655 inline_c_ffi_6989586621679442661 inline_c_ffi_6989586621679442667 inline_c_ffi_6989586621679442673 inline_c_ffi_6989586621679442679 inline_c_ffi_6989586621679442689 inline_c_ffi_6989586621679442705 inline_c_ffi_6989586621679442725 inline_c_ffi_6989586621679442730c'THRESH_BINARYc'THRESH_BINARY_INVc'THRESH_TRUNCc'THRESH_TOZEROc'THRESH_TOZERO_INVmarshalThreshType c'THRESH_OTSUc'THRESH_TRIANGLEmarshalThreshValuec'FLOODFILL_FIXED_RANGEc'FLOODFILL_MASK_ONLYc'GC_INIT_WITH_RECTc'GC_INIT_WITH_MASK 
c'GC_EVALmarshalGrabCutOperationModemarshalGrabCutOperationModeRectc'COLOR_BGR2BGRAc'COLOR_RGB2RGBAc'COLOR_BGRA2BGRc'COLOR_RGBA2RGBc'COLOR_BGR2RGBAc'COLOR_RGB2BGRAc'COLOR_RGBA2BGRc'COLOR_BGRA2RGBc'COLOR_BGR2RGBc'COLOR_RGB2BGRc'COLOR_BGRA2RGBAc'COLOR_RGBA2BGRAc'COLOR_BGR2GRAYc'COLOR_RGB2GRAYc'COLOR_GRAY2BGRc'COLOR_GRAY2RGBc'COLOR_GRAY2BGRAc'COLOR_GRAY2RGBAc'COLOR_BGRA2GRAYc'COLOR_RGBA2GRAYc'COLOR_BGR2BGR565c'COLOR_RGB2BGR565c'COLOR_BGR5652BGRc'COLOR_BGR5652RGBc'COLOR_BGRA2BGR565c'COLOR_RGBA2BGR565c'COLOR_BGR5652BGRAc'COLOR_BGR5652RGBAc'COLOR_GRAY2BGR565c'COLOR_BGR5652GRAYc'COLOR_BGR2BGR555c'COLOR_RGB2BGR555c'COLOR_BGR5552BGRc'COLOR_BGR5552RGBc'COLOR_BGRA2BGR555c'COLOR_RGBA2BGR555c'COLOR_BGR5552BGRAc'COLOR_BGR5552RGBAc'COLOR_GRAY2BGR555c'COLOR_BGR5552GRAYc'COLOR_BGR2XYZc'COLOR_RGB2XYZc'COLOR_XYZ2BGRc'COLOR_XYZ2RGBc'COLOR_BGR2YCrCbc'COLOR_RGB2YCrCbc'COLOR_YCrCb2BGRc'COLOR_YCrCb2RGBc'COLOR_BGR2HSVc'COLOR_RGB2HSVc'COLOR_BGR2Labc'COLOR_RGB2Labc'COLOR_BGR2Luvc'COLOR_RGB2Luvc'COLOR_BGR2HLSc'COLOR_RGB2HLSc'COLOR_HSV2BGRc'COLOR_HSV2RGBc'COLOR_Lab2BGRc'COLOR_Lab2RGBc'COLOR_Luv2BGRc'COLOR_Luv2RGBc'COLOR_HLS2BGRc'COLOR_HLS2RGBc'COLOR_BGR2HSV_FULLc'COLOR_RGB2HSV_FULLc'COLOR_BGR2HLS_FULLc'COLOR_RGB2HLS_FULLc'COLOR_HSV2BGR_FULLc'COLOR_HSV2RGB_FULLc'COLOR_HLS2BGR_FULLc'COLOR_HLS2RGB_FULLc'COLOR_LBGR2Labc'COLOR_LRGB2Labc'COLOR_LBGR2Luvc'COLOR_LRGB2Luvc'COLOR_Lab2LBGRc'COLOR_Lab2LRGBc'COLOR_Luv2LBGRc'COLOR_Luv2LRGBc'COLOR_BGR2YUVc'COLOR_RGB2YUVc'COLOR_YUV2BGRc'COLOR_YUV2RGBc'COLOR_YUV2RGB_NV12c'COLOR_YUV2BGR_NV12c'COLOR_YUV2RGB_NV21c'COLOR_YUV2BGR_NV21c'COLOR_YUV420sp2RGBc'COLOR_YUV420sp2BGRc'COLOR_YUV2RGBA_NV12c'COLOR_YUV2BGRA_NV12c'COLOR_YUV2RGBA_NV21c'COLOR_YUV2BGRA_NV21c'COLOR_YUV420sp2RGBAc'COLOR_YUV420sp2BGRAc'COLOR_YUV2RGB_YV12c'COLOR_YUV2BGR_YV12c'COLOR_YUV2RGB_IYUVc'COLOR_YUV2BGR_IYUVc'COLOR_YUV2RGB_I420c'COLOR_YUV2BGR_I420c'COLOR_YUV420p2RGBc'COLOR_YUV420p2BGRc'COLOR_YUV2RGBA_YV12c'COLOR_YUV2BGRA_YV12c'COLOR_YUV2RGBA_IYUVc'COLOR_YUV2BGRA_IYUVc'COLOR_YUV2RGBA_I420c'COLOR_YUV2
BGRA_I420c'COLOR_YUV420p2RGBAc'COLOR_YUV420p2BGRAc'COLOR_YUV2GRAY_420c'COLOR_YUV2GRAY_NV21c'COLOR_YUV2GRAY_NV12c'COLOR_YUV2GRAY_YV12c'COLOR_YUV2GRAY_IYUVc'COLOR_YUV2GRAY_I420c'COLOR_YUV420sp2GRAYc'COLOR_YUV420p2GRAYc'COLOR_YUV2RGB_UYVYc'COLOR_YUV2BGR_UYVYc'COLOR_YUV2RGB_Y422c'COLOR_YUV2BGR_Y422c'COLOR_YUV2RGB_UYNVc'COLOR_YUV2BGR_UYNVc'COLOR_YUV2RGBA_UYVYc'COLOR_YUV2BGRA_UYVYc'COLOR_YUV2RGBA_Y422c'COLOR_YUV2BGRA_Y422c'COLOR_YUV2RGBA_UYNVc'COLOR_YUV2BGRA_UYNVc'COLOR_YUV2RGB_YUY2c'COLOR_YUV2BGR_YUY2c'COLOR_YUV2RGB_YVYUc'COLOR_YUV2BGR_YVYUc'COLOR_YUV2RGB_YUYVc'COLOR_YUV2BGR_YUYVc'COLOR_YUV2RGB_YUNVc'COLOR_YUV2BGR_YUNVc'COLOR_YUV2RGBA_YUY2c'COLOR_YUV2BGRA_YUY2c'COLOR_YUV2RGBA_YVYUc'COLOR_YUV2BGRA_YVYUc'COLOR_YUV2RGBA_YUYVc'COLOR_YUV2BGRA_YUYVc'COLOR_YUV2RGBA_YUNVc'COLOR_YUV2BGRA_YUNVc'COLOR_YUV2GRAY_UYVYc'COLOR_YUV2GRAY_YUY2c'COLOR_YUV2GRAY_Y422c'COLOR_YUV2GRAY_UYNVc'COLOR_YUV2GRAY_YVYUc'COLOR_YUV2GRAY_YUYVc'COLOR_YUV2GRAY_YUNVc'COLOR_RGBA2mRGBAc'COLOR_mRGBA2RGBAc'COLOR_RGB2YUV_I420c'COLOR_BGR2YUV_I420c'COLOR_RGB2YUV_IYUVc'COLOR_BGR2YUV_IYUVc'COLOR_RGBA2YUV_I420c'COLOR_BGRA2YUV_I420c'COLOR_RGBA2YUV_IYUVc'COLOR_BGRA2YUV_IYUVc'COLOR_RGB2YUV_YV12c'COLOR_BGR2YUV_YV12c'COLOR_RGBA2YUV_YV12c'COLOR_BGRA2YUV_YV12c'COLOR_BayerBG2BGRc'COLOR_BayerGB2BGRc'COLOR_BayerRG2BGRc'COLOR_BayerGR2BGRc'COLOR_BayerBG2RGBc'COLOR_BayerGB2RGBc'COLOR_BayerRG2RGBc'COLOR_BayerGR2RGBc'COLOR_BayerBG2GRAYc'COLOR_BayerGB2GRAYc'COLOR_BayerRG2GRAYc'COLOR_BayerGR2GRAYc'COLOR_BayerBG2BGR_VNGc'COLOR_BayerGB2BGR_VNGc'COLOR_BayerRG2BGR_VNGc'COLOR_BayerGR2BGR_VNGc'COLOR_BayerBG2RGB_VNGc'COLOR_BayerGB2RGB_VNGc'COLOR_BayerRG2RGB_VNGc'COLOR_BayerGR2RGB_VNGc'COLOR_BayerBG2BGR_EAc'COLOR_BayerGB2BGR_EAc'COLOR_BayerRG2BGR_EAc'COLOR_BayerGR2BGR_EAc'COLOR_BayerBG2RGB_EAc'COLOR_BayerGB2RGB_EAc'COLOR_BayerRG2RGB_EAc'COLOR_BayerGR2RGB_EAcolorConversionCode$fColorCodeMatchesChannelscodeS$fColorCodeMatchesChannelscodeD$fColorConversionBayerGRRGB_EA$fColorConversionBayerRGRGB_EA$fColorConversionBayerGBRGB_EA$fColorConversionBa
yerBGRGB_EA$fColorConversionBayerGRBGR_EA$fColorConversionBayerRGBGR_EA$fColorConversionBayerGBBGR_EA$fColorConversionBayerBGBGR_EA$fColorConversionBayerGRRGB_VNG$fColorConversionBayerRGRGB_VNG$fColorConversionBayerGBRGB_VNG$fColorConversionBayerBGRGB_VNG$fColorConversionBayerGRBGR_VNG$fColorConversionBayerRGBGR_VNG$fColorConversionBayerGBBGR_VNG$fColorConversionBayerBGBGR_VNG$fColorConversionBayerGRGRAY$fColorConversionBayerRGGRAY$fColorConversionBayerGBGRAY$fColorConversionBayerBGGRAY$fColorConversionBayerGRRGB$fColorConversionBayerRGRGB$fColorConversionBayerGBRGB$fColorConversionBayerBGRGB$fColorConversionBayerGRBGR$fColorConversionBayerRGBGR$fColorConversionBayerGBBGR$fColorConversionBayerBGBGR$fColorConversionBGRAYUV_YV12$fColorConversionRGBAYUV_YV12$fColorConversionBGRYUV_YV12$fColorConversionRGBYUV_YV12$fColorConversionBGRAYUV_IYUV$fColorConversionRGBAYUV_IYUV$fColorConversionBGRAYUV_I420$fColorConversionRGBAYUV_I420$fColorConversionBGRYUV_IYUV$fColorConversionRGBYUV_IYUV$fColorConversionBGRYUV_I420$fColorConversionRGBYUV_I420$fColorConversionMRGBARGBA$fColorConversionRGBAMRGBA$fColorConversionYUVGRAY_YUNV$fColorConversionYUVGRAY_YUYV$fColorConversionYUVGRAY_YVYU$fColorConversionYUVGRAY_UYNV$fColorConversionYUVGRAY_Y422$fColorConversionYUVGRAY_YUY2$fColorConversionYUVGRAY_UYVY$fColorConversionYUVBGRA_YUNV$fColorConversionYUVRGBA_YUNV$fColorConversionYUVBGRA_YUYV$fColorConversionYUVRGBA_YUYV$fColorConversionYUVBGRA_YVYU$fColorConversionYUVRGBA_YVYU$fColorConversionYUVBGRA_YUY2$fColorConversionYUVRGBA_YUY2$fColorConversionYUVBGR_YUNV$fColorConversionYUVRGB_YUNV$fColorConversionYUVBGR_YUYV$fColorConversionYUVRGB_YUYV$fColorConversionYUVBGR_YVYU$fColorConversionYUVRGB_YVYU$fColorConversionYUVBGR_YUY2$fColorConversionYUVRGB_YUY2$fColorConversionYUVBGRA_UYNV$fColorConversionYUVRGBA_UYNV$fColorConversionYUVBGRA_Y422$fColorConversionYUVRGBA_Y422$fColorConversionYUVBGRA_UYVY$fColorConversionYUVRGBA_UYVY$fColorConversionYUVBGR_UYNV$fColorConversionYUVRGB_UYNV$fColorCon
versionYUVBGR_Y422$fColorConversionYUVRGB_Y422$fColorConversionYUVBGR_UYVY$fColorConversionYUVRGB_UYVY$fColorConversionYUV420pGRAY$fColorConversionYUV420spGRAY$fColorConversionYUVGRAY_I420$fColorConversionYUVGRAY_IYUV$fColorConversionYUVGRAY_YV12$fColorConversionYUVGRAY_NV12$fColorConversionYUVGRAY_NV21$fColorConversionYUVGRAY_420$fColorConversionYUV420pBGRA$fColorConversionYUV420pRGBA$fColorConversionYUVBGRA_I420$fColorConversionYUVRGBA_I420$fColorConversionYUVBGRA_IYUV$fColorConversionYUVRGBA_IYUV$fColorConversionYUVBGRA_YV12$fColorConversionYUVRGBA_YV12$fColorConversionYUV420pBGR$fColorConversionYUV420pRGB$fColorConversionYUVBGR_I420$fColorConversionYUVRGB_I420$fColorConversionYUVBGR_IYUV$fColorConversionYUVRGB_IYUV$fColorConversionYUVBGR_YV12$fColorConversionYUVRGB_YV12$fColorConversionYUV420spBGRA$fColorConversionYUV420spRGBA$fColorConversionYUVBGRA_NV21$fColorConversionYUVRGBA_NV21$fColorConversionYUVBGRA_NV12$fColorConversionYUVRGBA_NV12$fColorConversionYUV420spBGR$fColorConversionYUV420spRGB$fColorConversionYUVBGR_NV21$fColorConversionYUVRGB_NV21$fColorConversionYUVBGR_NV12$fColorConversionYUVRGB_NV12$fColorConversionYUVRGB$fColorConversionYUVBGR$fColorConversionRGBYUV$fColorConversionBGRYUV$fColorConversionLuvLRGB$fColorConversionLuvLBGR$fColorConversionLabLRGB$fColorConversionLabLBGR$fColorConversionLRGBLuv$fColorConversionLBGRLuv$fColorConversionLRGBLab$fColorConversionLBGRLab$fColorConversionHLSRGB_FULL$fColorConversionHLSBGR_FULL$fColorConversionHSVRGB_FULL$fColorConversionHSVBGR_FULL$fColorConversionRGBHLS_FULL$fColorConversionBGRHLS_FULL$fColorConversionRGBHSV_FULL$fColorConversionBGRHSV_FULL$fColorConversionHLSRGB$fColorConversionHLSBGR$fColorConversionLuvRGB$fColorConversionLuvBGR$fColorConversionLabRGB$fColorConversionLabBGR$fColorConversionHSVRGB$fColorConversionHSVBGR$fColorConversionRGBHLS$fColorConversionBGRHLS$fColorConversionRGBLuv$fColorConversionBGRLuv$fColorConversionRGBLab$fColorConversionBGRLab$fColorConversionRGBHSV$fColorConversionBGRH
SV$fColorConversionYCrCbRGB$fColorConversionYCrCbBGR$fColorConversionRGBYCrCb$fColorConversionBGRYCrCb$fColorConversionXYZRGB$fColorConversionXYZBGR$fColorConversionRGBXYZ$fColorConversionBGRXYZ$fColorConversionBGR555GRAY$fColorConversionGRAYBGR555$fColorConversionBGR555RGBA$fColorConversionBGR555BGRA$fColorConversionRGBABGR555$fColorConversionBGRABGR555$fColorConversionBGR555RGB$fColorConversionBGR555BGR$fColorConversionRGBBGR555$fColorConversionBGRBGR555$fColorConversionBGR565GRAY$fColorConversionGRAYBGR565$fColorConversionBGR565RGBA$fColorConversionBGR565BGRA$fColorConversionRGBABGR565$fColorConversionBGRABGR565$fColorConversionBGR565RGB$fColorConversionBGR565BGR$fColorConversionRGBBGR565$fColorConversionBGRBGR565$fColorConversionRGBAGRAY$fColorConversionBGRAGRAY$fColorConversionGRAYRGBA$fColorConversionGRAYBGRA$fColorConversionGRAYRGB$fColorConversionGRAYBGR$fColorConversionRGBGRAY$fColorConversionBGRGRAY$fColorConversionRGBABGRA$fColorConversionBGRARGBA$fColorConversionRGBBGR$fColorConversionBGRRGB$fColorConversionBGRARGB$fColorConversionRGBABGR$fColorConversionRGBBGRA$fColorConversionBGRRGBA$fColorConversionRGBARGB$fColorConversionBGRABGR$fColorConversionRGBRGBA$fColorConversionBGRBGRA inline_c_ffi_6989586621679500086 inline_c_ffi_6989586621679500094 inline_c_ffi_6989586621679500102 inline_c_ffi_6989586621679500107 inline_c_ffi_6989586621679500290 inline_c_ffi_6989586621679500298 inline_c_ffi_6989586621679500306 inline_c_ffi_6989586621679500311 inline_c_ffi_6989586621679500494 inline_c_ffi_6989586621679500505 inline_c_ffi_6989586621679500513 inline_c_ffi_6989586621679500518 inline_c_ffi_6989586621679500719 inline_c_ffi_6989586621679500730 inline_c_ffi_6989586621679500738 inline_c_ffi_6989586621679500743 inline_c_ffi_6989586621679500944 inline_c_ffi_6989586621679500958 inline_c_ffi_6989586621679500966 inline_c_ffi_6989586621679500971 inline_c_ffi_6989586621679501190 inline_c_ffi_6989586621679501204 inline_c_ffi_6989586621679501212 
inline_c_ffi_6989586621679501217 inline_c_ffi_6989586621679501436 inline_c_ffi_6989586621679501456 inline_c_ffi_6989586621679501464 inline_c_ffi_6989586621679501469 inline_c_ffi_6989586621679501724 inline_c_ffi_6989586621679501744 inline_c_ffi_6989586621679501752 inline_c_ffi_6989586621679501757 inline_c_ffi_6989586621679502012 inline_c_ffi_6989586621679502020 inline_c_ffi_6989586621679502028 inline_c_ffi_6989586621679502033 inline_c_ffi_6989586621679502216 inline_c_ffi_6989586621679502224 inline_c_ffi_6989586621679502232 inline_c_ffi_6989586621679502237 inline_c_ffi_6989586621679502420 inline_c_ffi_6989586621679502434 inline_c_ffi_6989586621679502442 inline_c_ffi_6989586621679502447 inline_c_ffi_6989586621679502666 inline_c_ffi_6989586621679502680 inline_c_ffi_6989586621679502688 inline_c_ffi_6989586621679502693 inline_c_ffi_6989586621679502912 inline_c_ffi_6989586621679502932 inline_c_ffi_6989586621679502940 inline_c_ffi_6989586621679502945 inline_c_ffi_6989586621679503200 inline_c_ffi_6989586621679503220 inline_c_ffi_6989586621679503228 inline_c_ffi_6989586621679503233 inline_c_ffi_6989586621679503488 inline_c_ffi_6989586621679503499 inline_c_ffi_6989586621679503507 inline_c_ffi_6989586621679503512 inline_c_ffi_6989586621679503713 inline_c_ffi_6989586621679503724 inline_c_ffi_6989586621679503732 inline_c_ffi_6989586621679503737 inline_c_ffi_6989586621679503938 inline_c_ffi_6989586621679503958 inline_c_ffi_6989586621679503966 inline_c_ffi_6989586621679503971 inline_c_ffi_6989586621679504226 inline_c_ffi_6989586621679504246 inline_c_ffi_6989586621679504254 inline_c_ffi_6989586621679504259 inline_c_ffi_6989586621679504514 inline_c_ffi_6989586621679504543 inline_c_ffi_6989586621679504551 inline_c_ffi_6989586621679504556 inline_c_ffi_6989586621679504865 inline_c_ffi_6989586621679504894 inline_c_ffi_6989586621679504902 inline_c_ffi_6989586621679504907 inline_c_ffi_6989586621679505216 inline_c_ffi_6989586621679505254 inline_c_ffi_6989586621679505262 
inline_c_ffi_6989586621679505267 inline_c_ffi_6989586621679505630 inline_c_ffi_6989586621679505668 inline_c_ffi_6989586621679505676 inline_c_ffi_6989586621679505681 inline_c_ffi_6989586621679506044 inline_c_ffi_6989586621679506058 inline_c_ffi_6989586621679506066 inline_c_ffi_6989586621679506071 inline_c_ffi_6989586621679506290 inline_c_ffi_6989586621679506304 inline_c_ffi_6989586621679506312 inline_c_ffi_6989586621679506317 inline_c_ffi_6989586621679506536 inline_c_ffi_6989586621679506574 inline_c_ffi_6989586621679506582 inline_c_ffi_6989586621679506587 inline_c_ffi_6989586621679506950 inline_c_ffi_6989586621679506988 inline_c_ffi_6989586621679506996 inline_c_ffi_6989586621679507001 inline_c_ffi_6989586621679507364 inline_c_ffi_6989586621679507414 inline_c_ffi_6989586621679507422 inline_c_ffi_6989586621679507427 inline_c_ffi_6989586621679507862 inline_c_ffi_6989586621679507912 inline_c_ffi_6989586621679507920 inline_c_ffi_6989586621679507925 inline_c_ffi_6989586621679508360 inline_c_ffi_6989586621679508377 inline_c_ffi_6989586621679508385 inline_c_ffi_6989586621679508390 inline_c_ffi_6989586621679508627 inline_c_ffi_6989586621679508644 inline_c_ffi_6989586621679508652 inline_c_ffi_6989586621679508657 inline_c_ffi_6989586621679508894 inline_c_ffi_6989586621679508914 inline_c_ffi_6989586621679508922 inline_c_ffi_6989586621679508927 inline_c_ffi_6989586621679509182 inline_c_ffi_6989586621679509202 inline_c_ffi_6989586621679509210 inline_c_ffi_6989586621679509215 inline_c_ffi_6989586621679509470 inline_c_ffi_6989586621679509478 inline_c_ffi_6989586621679509483 inline_c_ffi_6989586621679509620 inline_c_ffi_6989586621679509628 inline_c_ffi_6989586621679509633#repa-3.4.1.2-DMB50ySXpC65Ocf6jv4ubmData.Array.Repa.BaseArray inline_c_ffi_6989586621679565232D:R:ArrayMshdepth0 inline_c_ffi_6989586621679573547 inline_c_ffi_6989586621679573557 inline_c_ffi_6989586621679573567 inline_c_ffi_6989586621679573577 inline_c_ffi_6989586621679573587 inline_c_ffi_6989586621679573597 
inline_c_ffi_6989586621679573607 inline_c_ffi_6989586621679573617 inline_c_ffi_6989586621679573627 inline_c_ffi_6989586621679573637 inline_c_ffi_6989586621679573647 inline_c_ffi_6989586621679573657 inline_c_ffi_6989586621679573667 inline_c_ffi_6989586621679573677 inline_c_ffi_6989586621679573687 inline_c_ffi_6989586621679573697 inline_c_ffi_6989586621679573707 inline_c_ffi_6989586621679573717 inline_c_ffi_6989586621679573727 inline_c_ffi_6989586621679573737 inline_c_ffi_6989586621679573747 inline_c_ffi_6989586621679573757 inline_c_ffi_6989586621679573767 inline_c_ffi_6989586621679573777 inline_c_ffi_6989586621679573787 inline_c_ffi_6989586621679573797 inline_c_ffi_6989586621679573807 inline_c_ffi_6989586621679573817 inline_c_ffi_6989586621679573827 inline_c_ffi_6989586621679573837 inline_c_ffi_6989586621679573847 inline_c_ffi_6989586621679573857 inline_c_ffi_6989586621679573867 inline_c_ffi_6989586621679573877 inline_c_ffi_6989586621679573887 inline_c_ffi_6989586621679573897 inline_c_ffi_6989586621679573907 inline_c_ffi_6989586621679573917 inline_c_ffi_6989586621679573927 inline_c_ffi_6989586621679573937 inline_c_ffi_6989586621679573947 inline_c_ffi_6989586621679573957 inline_c_ffi_6989586621679573967 repaToM23 repaToM33 $fToMatV3 $fToMatV2 $fFromMatV3 $fFromMatV2 $fToMatVec $fToMatVec0 $fToMatVec1 $fToMatVec2 $fToMatVec3 $fToMatVec4 $fToMatVec5 $fToMatVec6 $fToMatVec7 $fToMatMatx $fToMatMatx0 $fToMatMatx1 $fToMatMatx2 $fToMatMatx3 $fToMatMatx4 $fToMatMatx5 $fToMatMatx6 $fToMatMatx7 $fToMatMatx8 $fToMatMatx9 $fToMatMatx10 $fToMatMatx11 $fToMatMatx12 $fToMatMatx13 $fToMatMatx14 $fToMatMatx15 $fToMatMatx16 $fToMatMatx17 $fToMatMatx18 $fToMatMatx19 $fToMatMatx20 $fToMatMatx21 $fToMatMatx22 $fToMatMatx23 $fToMatMatx24 $fToMatMatx25 $fToMatMatx26 $fToMatMatx27 $fToMatMatx28 $fToMatMatx29 $fToMatMatx30 $fToMatMatx31 $fToMatMatx32 $fFromMatMat $fToMatMat inline_c_ffi_6989586621679612974 inline_c_ffi_6989586621679612992 inline_c_ffi_6989586621679613025 
inline_c_ffi_6989586621679613059 inline_c_ffi_6989586621679635390 inline_c_ffi_6989586621679635400 inline_c_ffi_6989586621679635419 inline_c_ffi_6989586621679635428 inline_c_ffi_6989586621679635507 inline_c_ffi_6989586621679635518 inline_c_ffi_6989586621679635540 inline_c_ffi_6989586621679635557 inline_c_ffi_6989586621679635576 windowNamewindowMouseCallbackwindowTrackbars TrackbarStatetrackbarCallbacktrackbarValuePtr freeTrackbarmatchEventFlagc'EVENT_FLAG_LBUTTONc'EVENT_FLAG_RBUTTONc'EVENT_FLAG_MBUTTONc'EVENT_FLAG_CTRLKEYc'EVENT_FLAG_SHIFTKEYc'EVENT_FLAG_ALTKEYc'EVENT_MOUSEMOVEc'EVENT_LBUTTONDOWNc'EVENT_RBUTTONDOWNc'EVENT_MBUTTONDOWNc'EVENT_LBUTTONUPc'EVENT_RBUTTONUPc'EVENT_MBUTTONUPc'EVENT_LBUTTONDBLCLKc'EVENT_RBUTTONDBLCLKc'EVENT_MBUTTONDBLCLKc'EVENT_MOUSEWHEELc'EVENT_MOUSEHWHEELunmarshalEvent inline_c_ffi_6989586621679655740 inline_c_ffi_6989586621679655756 inline_c_ffi_6989586621679655780 inline_c_ffi_6989586621679662992 inline_c_ffi_6989586621679667550 inline_c_ffi_6989586621679667569 inline_c_ffi_6989586621679667582 inline_c_ffi_6989586621679667592 inline_c_ffi_6989586621679667602 inline_c_ffi_6989586621679667612 inline_c_ffi_6989586621679667626 inline_c_ffi_6989586621679667641 inline_c_ffi_6989586621679667655 inline_c_ffi_6989586621679667673 inline_c_ffi_6989586621679667690 inline_c_ffi_6989586621679667699unVideoCapture inline_c_ffi_6989586621679674515 inline_c_ffi_6989586621679674525 inline_c_ffi_6989586621679674535 inline_c_ffi_6989586621679674545 inline_c_ffi_6989586621679674571 inline_c_ffi_6989586621679675025 inline_c_ffi_6989586621679675030 inline_c_ffi_6989586621679675075 inline_c_ffi_6989586621679675106 inline_c_ffi_6989586621679675116 inline_c_ffi_6989586621679675895 inline_c_ffi_6989586621679675900 inline_c_ffi_6989586621679675934 inline_c_ffi_6989586621679675956 inline_c_ffi_6989586621679675966 unKeyPointunDMatch newKeyPoint newDMatch inline_c_ffi_6989586621679694165 inline_c_ffi_6989586621679694175 inline_c_ffi_6989586621679694203 
inline_c_ffi_6989586621679694214 inline_c_ffi_6989586621679694238 inline_c_ffi_6989586621679694249 inline_c_ffi_6989586621679694299 inline_c_ffi_6989586621679694327 inline_c_ffi_6989586621679694339 inline_c_ffi_6989586621679694412 inline_c_ffi_6989586621679694436 inline_c_ffi_6989586621679694448 inline_c_ffi_6989586621679694462 inline_c_ffi_6989586621679694472 inline_c_ffi_6989586621679694490 inline_c_ffi_6989586621679694508 inline_c_ffi_6989586621679694522 inline_c_ffi_6989586621679694570 inline_c_ffi_6989586621679694580 inline_c_ffi_6989586621679694590 inline_c_ffi_6989586621679694599 inline_c_ffi_6989586621679694608DrawMatchesParams matchColorsinglePointColorflagsunFlannBasedMatcher unBFMatcher BaseMatcher unBaseMatcherunSimpleBlobDetectorunOrbinfinity marshalWTA_Kc'HARRIS_SCORE c'FAST_SCOREmarshalOrbScoreTypenewOrbnewSimpleBlobDetectormarshalIndexParamsmarshallSearchParams inline_c_ffi_6989586621679757209 inline_c_ffi_6989586621679757219 inline_c_ffi_6989586621679757258 inline_c_ffi_6989586621679757269 inline_c_ffi_6989586621679757313 inline_c_ffi_6989586621679757332 inline_c_ffi_6989586621679757341unCascadeClassifier inline_c_ffi_6989586621679769835c'COLORMAP_AUTUMNc'COLORMAP_BONEc'COLORMAP_JETc'COLORMAP_WINTERc'COLORMAP_RAINBOWc'COLORMAP_OCEANc'COLORMAP_SUMMERc'COLORMAP_SPRINGc'COLORMAP_COOLc'COLORMAP_HSVc'COLORMAP_PINKc'COLORMAP_HOTc'COLORMAP_PARULAmarshalColorMap inline_c_ffi_6989586621679773723 inline_c_ffi_6989586621679773754 inline_c_ffi_6989586621679773798 inline_c_ffi_6989586621679773826 inline_c_ffi_6989586621679773858 inline_c_ffi_6989586621679773897 inline_c_ffi_6989586621679773929 inline_c_ffi_6989586621679773953 inline_c_ffi_6989586621679773996 inline_c_ffi_6989586621679774024 inline_c_ffi_6989586621679774057 inline_c_ffi_6989586621679774075c'LINE_8c'LINE_4 c'LINE_AAmarshalLineType marshalFont 
c'FONT_ITALICmarshalFontSlantc'FONT_HERSHEY_SIMPLEXc'FONT_HERSHEY_PLAINc'FONT_HERSHEY_DUPLEXc'FONT_HERSHEY_COMPLEXc'FONT_HERSHEY_TRIPLEXc'FONT_HERSHEY_COMPLEX_SMALLc'FONT_HERSHEY_SCRIPT_SIMPLEXc'FONT_HERSHEY_SCRIPT_COMPLEXmarshalFontFaceghc-prim GHC.TypesTruemarshalContourDrawMode inline_c_ffi_6989586621679865531 inline_c_ffi_6989586621679865575 inline_c_ffi_6989586621679865586 inline_c_ffi_6989586621679865626 inline_c_ffi_6989586621679865637 inline_c_ffi_6989586621679865675 inline_c_ffi_6989586621679865686GHC.WordWord8Word16Float inline_c_ffi_6989586621679893783 inline_c_ffi_6989586621679893824 inline_c_ffi_6989586621679893859 inline_c_ffi_6989586621679893873 inline_c_ffi_6989586621679893906 inline_c_ffi_6989586621679893928marshalFloodFillOperationFlags inline_c_ffi_6989586621679905975c'CV_TM_SQDIFFc'CV_TM_SQDIFF_NORMED c'CV_TM_CCORRc'CV_TM_CCORR_NORMEDc'CV_TM_CCOEFFc'CV_TM_CCOEFF_NORMEDmarshalMatchTemplateMethod inline_c_ffi_6989586621679910298 inline_c_ffi_6989586621679910322 inline_c_ffi_6989586621679910352 inline_c_ffi_6989586621679910380 inline_c_ffi_6989586621679910409 inline_c_ffi_6989586621679910421 inline_c_ffi_6989586621679910441 inline_c_ffi_6989586621679910454c'CV_RETR_EXTERNALc'CV_RETR_LISTc'CV_RETR_CCOMPc'CV_RETR_TREEc'CV_CHAIN_APPROX_NONEc'CV_CHAIN_APPROX_SIMPLEc'CV_CHAIN_APPROX_TC89_L1c'CV_CHAIN_APPROX_TC89_KCOSmarshalContourRetrievalMode!marshalContourApproximationMethodproduct0$fToHElemsDouble$fToHElemsFloat$fToHElemsInt32$fToHElemsInt16$fToHElemsWord16$fToHElemsInt8$fToHElemsWord8c'INTER_NEARESTc'INTER_LINEAR c'INTER_CUBIC c'INTER_AREAc'INTER_LANCZOS4marshalInterpolationMethodc'BORDER_CONSTANTc'BORDER_REPLICATEc'BORDER_REFLECT c'BORDER_WRAPc'BORDER_REFLECT_101c'BORDER_TRANSPARENTc'BORDER_ISOLATEDmarshalBorderMode inline_c_ffi_6989586621679994987 inline_c_ffi_6989586621679995025 inline_c_ffi_6989586621679995063 inline_c_ffi_6989586621679995077 inline_c_ffi_6989586621679995091 inline_c_ffi_6989586621679995109 inline_c_ffi_6989586621679995139 
inline_c_ffi_6989586621679995161marshalResizeAbsRelc'WARP_FILL_OUTLIERSc'WARP_INVERSE_MAP inline_c_ffi_6989586621680012780 inline_c_ffi_6989586621680012797 inline_c_ffi_6989586621680012816 inline_c_ffi_6989586621680012843 inline_c_ffi_6989586621680012879 inline_c_ffi_6989586621680012910 inline_c_ffi_6989586621680012946 inline_c_ffi_6989586621680012985 inline_c_ffi_6989586621680013009 defaultAnchor c'MORPH_RECTc'MORPH_ELLIPSE c'MORPH_CROSSmarshalMorphShape c'MORPH_OPEN c'MORPH_CLOSEc'MORPH_GRADIENTc'MORPH_TOPHATc'MORPH_BLACKHATmarshalMorphOperationJunJ inline_c_ffi_6989586621680398484 inline_c_ffi_6989586621680398502 inline_c_ffi_6989586621680398524 inline_c_ffi_6989586621680398538 inline_c_ffi_6989586621680398560 inline_c_ffi_6989586621680398574 inline_c_ffi_6989586621680398584 inline_c_ffi_6989586621680398597 inline_c_ffi_6989586621680398607 inline_c_ffi_6989586621680398620 inline_c_ffi_6989586621680398630 inline_c_ffi_6989586621680398640unBackgroundSubtractorMOG2unBackgroundSubtractorKNNFalse inline_c_ffi_6989586621680410693 inline_c_ffi_6989586621680410703 inline_c_ffi_6989586621680410713 inline_c_ffi_6989586621680410727 inline_c_ffi_6989586621680410736 unVideoWriter inline_c_ffi_6989586621680414561 inline_c_ffi_6989586621680414579 inline_c_ffi_6989586621680414593 inline_c_ffi_6989586621680414611 inline_c_ffi_6989586621680414629 inline_c_ffi_6989586621680414647 inline_c_ffi_6989586621680414680 inline_c_ffi_6989586621680414702 inline_c_ffi_6989586621680414720 inline_c_ffi_6989586621680414742 inline_c_ffi_6989586621680414756 inline_c_ffi_6989586621680414774 inline_c_ffi_6989586621680414792 inline_c_ffi_6989586621680414810 inline_c_ffi_6989586621680414827 inline_c_ffi_6989586621680414844 inline_c_ffi_6989586621680414870 inline_c_ffi_6989586621680414891 inline_c_ffi_6989586621680414917 inline_c_ffi_6989586621680414950 inline_c_ffi_6989586621680414965 inline_c_ffi_6989586621680414987 inline_c_ffi_6989586621680415005 
inline_c_ffi_6989586621680415019marshallFlipDirection inline_c_ffi_6989586621680454157 inline_c_ffi_6989586621680454186marshalFundamentalMatMethodmarshalWhichImageplusPtrSpeekSpokeSisoApply