Type-level to value-level conversion of numbers that are either
(D)ynamically or (S)tatically known.

> toNatDS (Proxy ('S 42)) == S 42
> toNatDS (Proxy 'D) == D

Heterogeneous lists, implemented as nested 2-tuples:

> f :: Int ::: Bool ::: Char ::: Z
> f = 3 ::: False ::: 'X' ::: Z

Z marks the end of the list.

DS: (D)ynamically or (S)tatically known values. Mainly used as a promoted
type; operationally it is exactly the Maybe type.

* D: something is dynamically known.
* S a: something is statically known, in particular: a.

Converts a DS value to the corresponding Maybe value. Type-level numbers
are statically known; value-level numbers are dynamically known. On the
type level this reifies the known natural number n; on the value level it
is the identity.

VideoCapture APIs (backends):

* Auto detect == 0
* Video For Windows (platform native)
* V4L/V4L2 capturing support via libv4l
* Same as CAP_V4L
* IEEE 1394 drivers
* Same as CAP_FIREWIRE
* Same as CAP_FIREWIRE
* Same as CAP_FIREWIRE
* Same as CAP_FIREWIRE
* QuickTime
* Unicap drivers
* DirectShow (via videoInput)
* PvAPI, Prosilica GigE SDK
* OpenNI (for Kinect)
* OpenNI (for Asus Xtion)
* Android - not used
* XIMEA Camera API
* AVFoundation framework for iOS (OS X Lion will have the same API)
* Smartek Giganetix GigEVisionSDK
* Microsoft Media Foundation (via videoInput)
* Microsoft Windows Runtime using Media Foundation
* Intel Perceptual Computing SDK
* OpenNI2 (for Kinect)
* OpenNI2 (for Asus Xtion and Occipital Structure sensors)
* gPhoto2 connection
* GStreamer
* Open and record video file or stream using the FFMPEG library
* Image Sequence (e.g. img_%02d.jpg)

VideoCapture properties:

* Current position of the video file in milliseconds.
* 0-based index of the frame to be decoded/captured next.
* Relative position of the video file: 0=start of the film, 1=end of the film.
* Width of the frames in the video stream.
* Height of the frames in the video stream.
* Frame rate.
* 4-character code of codec.
* Number of frames in the video file.
* Format of the Mat objects returned by VideoCapture::retrieve().
* Backend-specific value indicating the current capture mode.
* Brightness of the image (only for cameras).
* Contrast of the image (only for cameras).
* Saturation of the image (only for cameras).
* Hue of the image (only for cameras).
* Gain of the image (only for cameras).
* Exposure (only for cameras).
* Boolean flags indicating whether images should be converted to RGB.
* Currently unsupported.
* Rectification flag for stereo cameras (note: only supported by the
  DC1394 v 2.x backend currently).
* DC1394: exposure control done by the camera; the user can adjust the
  reference level using this feature.
* Pop up a video/camera filter dialog (note: only supported by the DSHOW
  backend currently; the property value is ignored).
* Any property we need.
  Meaning of this property depends on the backend.

Wrapper for mutable values.

Image encoding parameters:

* Compression (run length encoding)
* Binary
* Quality [1..100], > 100 == lossless
* 0..100

Number of channels.

Normalization type. Comparison type.

Types of which a value can be constructed from a pointer to the C
equivalent of that value. Used to wrap values created in C.

Perform an IO action with a pointer to the C equivalent of a value.

Perform an action with a temporary pointer to the underlying
representation of a value. The pointer is not guaranteed to be usable
outside the scope of this function.
The same warnings apply as for withForeignPtr.

Equivalent type in C. Actually a proxy type in Haskell that stands for
the equivalent type in C.

Information about the storage requirements of values in C. This class
assumes that the type a is merely a symbol that corresponds with a type
in C. Computes the storage requirements (in bytes) of values of type a
in C.

Callback function for trackbars.
Callback function for mouse events.

Haskell representations of OpenCV objects:

* cv::CascadeClassifier
* cv::VideoWriter
* cv::VideoCapture
* cv::Ptr<cv::BackgroundSubtractorKNN>
* cv::Ptr<cv::BackgroundSubtractorMOG2>
* cv::FlannBasedMatcher
* cv::BFMatcher
* cv::DescriptorMatcher
* cv::Ptr<cv::SimpleBlobDetector>
* cv::Ptr<cv::ORB>
* cv::DMatch
* cv::KeyPoint
* cv::Mat
* cv::Scalar_<double>
* cv::Range
* cv::TermCriteria
* cv::RotatedRect
* OpenCV exceptions

Mutable types use the same underlying representation as immutable types.

Trackbar callback arguments:

* Current position of the specified trackbar.
* Optional pointer to user data.

Mouse callback arguments:

* One of the cv::MouseEventTypes constants.
* The x-coordinate of the mouse event.
* The y-coordinate of the mouse event.
* One of the cv::MouseEventFlags constants.
* Optional pointer to user data.
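The storage-requirement class described above only needs the type as a symbol, never a value of it, which is why it takes a Proxy. A minimal standalone sketch of that pattern (the names CSizeOf, cSizeOf and CPoint2i are illustrative, not necessarily the library's):

```haskell
import Data.Proxy (Proxy (..))
import Foreign.C.Types (CInt)
import Foreign.Storable (sizeOf)

-- A proxy-based size class: the type 'a' is merely a symbol that
-- corresponds with a type in C, so we never need an actual value of it.
class CSizeOf a where
  -- | Storage requirement (in bytes) of values of type 'a' in C.
  cSizeOf :: Proxy a -> Int

-- | Stands for a C struct of two 32-bit ints (like cv::Point2i).
-- The type is uninhabited on the Haskell side.
data CPoint2i

instance CSizeOf CPoint2i where
  cSizeOf _ = 2 * sizeOf (undefined :: CInt)
```

Instances like this let marshalling code allocate the right number of bytes without ever constructing a Haskell value of the proxy type.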
Copy source to destination using C++'s placement new feature. This method
is intended for types that are proxies for actual types in C++:

  new(dst) CType(*src);

The copy should be performed by constructing a new object in the memory
pointed to by dst. The new object is initialised using the value of src.
This design allows underlying structures to be shared depending on the
implementation of CType.

Context useful to work with the OpenCV library: converts OpenCV basic
types to their counterparts in OpenCV.Internal.C.Inline.

Type name metadata, for both Haskell and C:

* Matx: row dimension, column dimension, depth type name in Haskell,
  depth type name in C.
* Point: point dimension, point template name in C, depth type name in
  Haskell, depth type name in C.
* Size: depth type name in Haskell, depth type name in C.
* Vec: vec dimension, depth type name in Haskell, depth type name in C.

Range: a continuous subsequence (slice) of a sequence. The type is used
to specify a row or a column span in a matrix (Mat) and for many other
purposes. mkRange a b is basically the same as a:b in Matlab or a..b in
Python.
As in Python, start is an inclusive left boundary of the range and end is
an exclusive right boundary of the range. Such a half-open interval is
usually denoted as [start, end).

<http://docs.opencv.org/3.0-last-rst/modules/core/doc/basic_structures.html#range OpenCV Sphinx doc>

TermCriteria: termination criteria for iterative algorithms.

<http://docs.opencv.org/3.0-last-rst/modules/core/doc/basic_structures.html#termcriteria OpenCV Sphinx doc>

RotatedRect: rotated (i.e. not up-right) rectangles on a plane. Each
rectangle is specified by the center point (mass center), the length of
each side and the rotation angle in degrees.

<http://docs.opencv.org/3.0-last-rst/modules/core/doc/basic_structures.html#rotatedrect OpenCV Sphinx doc>

Scalar: a 4-element vector with 64 bit floating point elements. The type
is widely used in OpenCV to pass pixel values.

<http://docs.opencv.org/3.0-last-rst/modules/core/doc/basic_structures.html#scalar OpenCV Sphinx doc>

A special Range value which means "the whole sequence" or "the whole
range".

Perform an action with a temporary pointer to an array of values. The
input values are placed consecutively in memory using the placement new
mechanism. This function is intended for types which are not managed by
the Haskell runtime, but by a foreign system (such as C). The pointer is
not guaranteed to be usable outside the scope of this function. The same
warnings apply as for withForeignPtr.

Fields:

* Rectangle mass center.
* Width and height of the rectangle.
* The rotation angle (in degrees). When the angle is 0, 90, 180, 270
  etc., the rectangle becomes an up-right rectangle.
* Optionally the maximum number of iterations/elements.
* Optionally the desired accuracy.
* Inclusive start.
* Exclusive end.

Tests whether a Mat is deserving of its type level attributes. Checks if
the properties encoded in the type of a Mat correspond to the value level
representation. For each property that does not hold this function will
produce an error message.
If everything checks out it will produce an empty list. The following
properties are checked:

* Dimensionality
* Size of each dimension
* Number of channels
* Depth (data type of elements)

If a property is explicitly encoded as statically unknown ('D'ynamic) it
will not be checked.

Relaxes the type level constraints. Only identical or looser constraints
are allowed; for tighter constraints use coerceMat. This allows you to
'forget' type level guarantees for zero cost. Similar to unsafeCoerceMat,
but totally safe.

* Identical: a to b with a ~ b
* Looser: ('S a) to 'D, or ('S a) to ('S b) with a in b
* Tighter: 'D to ('S a)

Similar in that it keeps the Mat alive during the execution of the given
action, but it doesn't extract the underlying pointer from the Mat.

All possible positions (indexes) for a given shape (list of sizes per
dimension):

> dimPositions [3, 4]
> [ [0, 0], [0, 1], [0, 2], [0, 3]
> , [1, 0], [1, 1], [1, 2], [1, 3]
> , [2, 0], [2, 1], [2, 2], [2, 3]
> ]

Arguments:

* The matrix to be checked.
* Error messages.
* Original Mat.
* Mat with relaxed constraints.

Navier-Stokes based method. Method by Alexandru Telea.

inpaint: restores the selected region in an image using the region
neighborhood.

Example:

> inpaintImg
>   :: forall h h2 w w2 c d .
>      ( Mat (ShapeT [h, w]) ('S c) ('S d) ~ Bikes_512x341
>      , h2 ~ ((*) h 2)
>      , w2 ~ ((*) w 2)
>      )
>   => Mat ('S ['S h2, 'S w2]) ('S c) ('S d)
> inpaintImg = exceptError $ do
>     maskInv <- bitwiseNot mask
>     maskBgr <- cvtColor gray bgr maskInv
>     damaged <- bitwiseAnd bikes_512x341 maskBgr
>     repairedNS <- inpaint 3 InpaintNavierStokes damaged mask
>     repairedT  <- inpaint 3 InpaintTelea        damaged mask
>     withMatM (Proxy :: Proxy [h2, w2])
>              (Proxy :: Proxy c)
>              (Proxy :: Proxy d)
>              black $ \imgM -> do
>       matCopyToM imgM (V2 0 0) damaged    Nothing
>       matCopyToM imgM (V2 w 0) maskBgr    Nothing
>       matCopyToM imgM (V2 0 h) repairedNS Nothing
>       matCopyToM imgM (V2 w h) repairedT  Nothing
>   where
>     mask = damageMask
>     w = fromInteger $ natVal (Proxy :: Proxy w)
>     h = fromInteger $ natVal (Proxy :: Proxy h)

doc/generated/examples/inpaintImg.png

fastNlMeansDenoisingColored: perform the fastNlMeansDenoising function
for colored images. Denoising is not done per channel, but in a different
colour space.

Example:

> fastNlMeansDenoisingColoredImg
>   :: forall h w w2 c d
>    . ( Mat (ShapeT [h, w]) ('S c) ('S d) ~ Lenna_512x512
>      , w2 ~ ((*) w 2)
>      )
>   => Mat ('S ['S h, 'S w2]) ('S c) ('S d)
> fastNlMeansDenoisingColoredImg = exceptError $ do
>     denoised <- fastNlMeansDenoisingColored 3 10 7 21 lenna_512x512
>     withMatM (Proxy :: Proxy [h, w2])
>              (Proxy :: Proxy c)
>              (Proxy :: Proxy d)
>              black $ \imgM -> do
>       matCopyToM imgM (V2 0 0) lenna_512x512 Nothing
>       matCopyToM imgM (V2 w 0) denoised      Nothing
>   where
>     w = fromInteger $ natVal (Proxy :: Proxy w)

doc/generated/examples/fastNlMeansDenoisingColoredImg.png

fastNlMeansDenoisingColoredMulti: perform the
fastNlMeansDenoisingColoredMulti function for colored images. Denoising
is not done per channel, but in a different colour space. This wrapper
differs from the original OpenCV version by using all input images and
denoising the middle one. The original version would allow an arbitrary
length vector and slide a window over it.
As we have to copy the Haskell vector before we can use it as a
std::vector on the C++ side, it is easier to trim the vector before
sending and use all frames.

Example:

> fastNlMeansDenoisingColoredMultiImg
>   :: forall h w w2 c d
>    . ( Mat (ShapeT [h, w]) ('S c) ('S d) ~ Lenna_512x512
>      , w2 ~ ((*) w 2)
>      )
>   => Mat ('S ['S h, 'S w2]) ('S c) ('S d)
> fastNlMeansDenoisingColoredMultiImg = exceptError $ do
>     denoised <- fastNlMeansDenoisingColoredMulti 3 10 7 21 (V.singleton lenna_512x512)
>     withMatM (Proxy :: Proxy [h, w2])
>              (Proxy :: Proxy c)
>              (Proxy :: Proxy d)
>              black $ \imgM -> do
>       matCopyToM imgM (V2 0 0) lenna_512x512 Nothing
>       matCopyToM imgM (V2 w 0) denoised      Nothing
>   where
>     w = fromInteger $ natVal (Proxy :: Proxy w)

doc/generated/examples/fastNlMeansDenoisingColoredMultiImg.png

denoise_TVL1: perform denoise_TVL1.

Example:

> denoise_TVL1Img
>   :: forall h w w2 c d
>    . ( Mat (ShapeT [h, w]) ('S c) ('S d) ~ Lenna_512x512
>      , w2 ~ ((*) w 2)
>      )
>   => Mat ('S ['S h, 'S w2]) ('S c) ('S d)
> denoise_TVL1Img = exceptError $ do
>     denoised <- matChannelMapM (denoise_TVL1 2 50 . V.singleton) lenna_512x512
>     withMatM (Proxy :: Proxy [h, w2])
>              (Proxy :: Proxy c)
>              (Proxy :: Proxy d)
>              black $ \imgM -> do
>       matCopyToM imgM (V2 0 0) lenna_512x512 Nothing
>       matCopyToM imgM (V2 w 0) denoised      Nothing
>   where
>     w = fromInteger $ natVal (Proxy :: Proxy w)

doc/generated/examples/denoise_TVL1Img.png

decolor: decolor a color image to a grayscale (1 channel) and a color
boosted image (3 channels).

Example:

> decolorImg
>   :: forall h h2 w w2 c d .
>      ( Mat (ShapeT [h, w]) ('S c) ('S d) ~ Bikes_512x341
>      , h2 ~ ((*) h 2)
>      , w2 ~ ((*) w 2)
>      )
>   => Mat ('S ['S h2, 'S w2]) ('S c) ('S d)
> decolorImg = exceptError $ do
>     (bikesGray, boost) <- decolor bikes_512x341
>     colorGray <- cvtColor gray bgr bikesGray
>     withMatM (Proxy :: Proxy [h2, w2])
>              (Proxy :: Proxy c)
>              (Proxy :: Proxy d)
>              white $ \imgM -> do
>       matCopyToM imgM (V2 0 0) bikes_512x341 Nothing
>       matCopyToM imgM (V2 0 h) colorGray     Nothing
>       matCopyToM imgM (V2 w h) boost         Nothing
>   where
>     w = fromInteger $ natVal (Proxy :: Proxy w)
>     h = fromInteger $ natVal (Proxy :: Proxy h)

doc/generated/examples/decolorImg.png

inpaint arguments:

* inpaintRadius: radius of a circular neighborhood of each point
  inpainted that is considered by the algorithm.
* Input image.
* Inpainting mask.
* Output image.

fastNlMeansDenoisingColored arguments:

* Parameter regulating filter strength for the luminance component. A
  bigger h value perfectly removes noise but also removes image details;
  a smaller h value preserves details but also preserves some noise.
* The same as h but for color components. For most images a value of 10
  will be enough to remove colored noise and not distort colors.
* templateWindowSize: size in pixels of the template patch that is used
  to compute weights. Should be odd. Recommended value: 7 pixels.
* searchWindowSize: size in pixels of the window that is used to compute
  the weighted average for a given pixel. Should be odd. Affects
  performance linearly: greater searchWindowSize - greater denoising
  time. Recommended value: 21 pixels.
* Input image: 8-bit 3-channel image.
* Output image: same size and type as the input.

fastNlMeansDenoisingColoredMulti arguments:

* Parameter regulating filter strength for the luminance component. A
  bigger h value perfectly removes noise but also removes image details;
  a smaller h value preserves details but also preserves some noise.
* The same as h but for color components. For most images a value of 10
  will be enough to remove colored noise and not distort colors.
* templateWindowSize: size in pixels of the template patch that is used
  to compute weights. Should be odd.
  Recommended value: 7 pixels.
* searchWindowSize: size in pixels of the window that is used to compute
  the weighted average for a given pixel. Should be odd. Affects
  performance linearly: greater searchWindowSize - greater denoising
  time. Recommended value: 21 pixels.
* Vector of an odd number of input 8-bit 3-channel images.
* Output image: same size and type as the input.

denoise_TVL1 arguments:

* details: more is more.
* Number of iterations that the algorithm will run.
* Vector of an odd number of input 8-bit 3-channel images.
* Output image: same size and type as the input.

decolor arguments:

* Input image.
* Output images.

Native Haskell representation of a rectangle.

Rect type name, for both Haskell and C:

* Depth type name in Haskell
* Depth type name in C
* Point type name in C
* Size type name in C

GrabCut modes:

* GCInitWithRect: initialize the state and the mask using the provided
  rectangle. After that, run iterCount iterations of the algorithm. The
  rectangle represents a ROI containing a segmented object. The pixels
  outside of the ROI are marked as obvious background.
* GCInitWithMask: initialize the state using the provided mask.
* A combination of GCInitWithRect and GCInitWithMask.
  All the pixels outside of the ROI are automatically initialized with
  GC_BGD.
* Just resume the algorithm.

Gives the number of channels associated with a particular color encoding.

Names of color encodings include:

* Bayer pattern with BG in the second row, second and third column
* Bayer pattern with GB in the second row, second and third column
* Bayer pattern with GR in the second row, second and third column
* Bayer pattern with RG in the second row, second and third column
* 24 bit RGB color space with channels: (B8:G8:R8)
* 15 bit RGB color space
* 16 bit RGB color space
* 32 bit RGBA color space with channels: (B8:G8:R8:A8)
* Edge-Aware demosaicing variants
* 8 bit single channel color space
* 24 bit RGB color space with channels: (R8:G8:B8)

Valid color conversions are described by the following graph:

doc/color_conversions.png
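The channel-count function described above maps each color encoding to its number of channels. A standalone sketch over a tiny illustrative subset of encodings (the data type and names here are assumptions for illustration; the real library covers the full set of OpenCV color codes):

```haskell
-- A few representative color encodings and their channel counts.
-- This is an illustrative subset, not the library's actual type.
data ColorCode
  = BGR      -- ^ 24 bit RGB color space with channels: (B8:G8:R8)
  | BGRA     -- ^ 32 bit RGBA color space with channels: (B8:G8:R8:A8)
  | Gray     -- ^ 8 bit single channel color space
  | BayerBG  -- ^ Bayer pattern with BG in the second row
  deriving (Show, Eq)

channelCount :: ColorCode -> Int
channelCount BGR     = 3
channelCount BGRA    = 4
channelCount Gray    = 1
channelCount BayerBG = 1  -- Bayer mosaics are single channel before demosaicing
```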
Representation tag for Repa arrays for OpenCV Mats.

Converts an OpenCV Mat into a Repa array. This is a zero-copy operation.

Identity matrix.

<http://docs.opencv.org/3.0-last-rst/modules/core/doc/basic_structures.html#mat-eye OpenCV Sphinx doc>

matSubRect: extract a sub region from a 2D-matrix (image).

Example:

> matSubRectImg :: Mat ('S ['D, 'D]) ('S 3) ('S Word8)
> matSubRectImg = exceptError $
>     withMatM (h ::: 2 * w ::: Z)
>              (Proxy :: Proxy 3)
>              (Proxy :: Proxy Word8)
>              white $ \imgM -> do
>       matCopyToM imgM (V2 0 0) birds_512x341 Nothing
>       matCopyToM imgM (V2 w 0) subImg        Nothing
>       lift $ rectangle imgM subRect blue 1 LineType_4 0
>       lift $ rectangle imgM (toRect $ HRect (V2 w 0) (V2 w h) :: Rect2i) blue 1 LineType_4 0
>   where
>     subRect = toRect $ HRect (V2 96 131) (V2 90 60)
>     subImg = exceptError $
>         resize (ResizeAbs $ toSize $ V2 w h) InterCubic =<<
>         matSubRect birds_512x341 subRect
>     [h, w] = miShape $ matInfo birds_512x341

doc/generated/examples/matSubRectImg.png

Converts an array to another data type with optional scaling.

<http://docs.opencv.org/3.0-last-rst/modules/core/doc/basic_structures.html?highlight=convertto#mat-convertto OpenCV Sphinx doc>

matFromFunc: create a matrix whose elements are defined by a function.

Example:

> matFromFuncImg
>   :: forall size .
>      (size ~ 300)
>   => Mat (ShapeT [size, size]) ('S 4) ('S Word8)
> matFromFuncImg = exceptError $
>     matFromFunc (Proxy :: Proxy [size, size])
>                 (Proxy :: Proxy 4)
>                 (Proxy :: Proxy Word8)
>                 example
>   where
>     example [y, x] 0 = 255 - normDist (V2 x y ^-^ bluePt )
>     example [y, x] 1 = 255 - normDist (V2 x y ^-^ greenPt)
>     example [y, x] 2 = 255 - normDist (V2 x y ^-^ redPt  )
>     example [y, x] 3 =       normDist (V2 x y ^-^ alphaPt)
>     example _pos _channel = error "impossible"
>
>     normDist :: V2 Int -> Word8
>     normDist v = floor $ min 255 $ 255 * Linear.norm (fromIntegral <$> v) / s'
>
>     bluePt  = V2 0 0
>     greenPt = V2 s s
>     redPt   = V2 s 0
>     alphaPt = V2 0 s
>
>     s  = fromInteger $ natVal (Proxy :: Proxy size) :: Int
>     s' = fromIntegral s :: Double

doc/generated/examples/matFromFuncImg.png

Transforms a given list of matrices of equal shape, channels, and depth,
by folding the given function over all matrix elements at each position.

Arguments:

* Optional scale factor.
* Optional delta added to the scaled values.

Callback function for trackbars.

Callback function for mouse events. A more convenient representation of
the raw callback. Context for a mouse event: information about which
buttons and modifier keys were pressed during the event.

Create a window with the specified title. Make sure to free the window
when you're done with it, or better yet: use withWindow.

Close the window and free up all resources associated with the window.

withWindow title act makes a window with the specified title and passes
the resulting Window to the computation act. The window will be destroyed
on exit from withWindow, whether by normal termination or by raising an
exception.
Make sure not to use the Window outside the act computation!

Resize a window to the specified size.

Trackbar arguments:

* Trackbar name
* Initial value
* Maximum value

Mouse event callback arguments:

* Current position of the specified trackbar.
* What happened to cause the callback to be fired.
* The x-coordinate of the mouse event.
* The y-coordinate of the mouse event.
* Context for the event, such as buttons and modifier keys pressed
  during the event.

imdecode: reads an image from a buffer in memory. The function reads an
image from the specified buffer in memory. If the buffer is too short or
contains invalid data, the empty matrix/image is returned.

<http://docs.opencv.org/3.0-last-rst/modules/imgcodecs/doc/reading_and_writing_images.html#imdecode OpenCV Sphinx doc>

imencode: encodes an image into a memory buffer.

WARNING: this function is not thread safe!

<http://docs.opencv.org/3.0-last-rst/modules/imgcodecs/doc/reading_and_writing_images.html#imencode OpenCV Sphinx doc>

Encodes an image into a memory buffer. See imencode.

estimateRigidTransform: computes an optimal affine transformation
between two 2D point sets.

<http://docs.opencv.org/3.0-last-rst/modules/video/doc/motion_analysis_and_object_tracking.html#estimaterigidtransform OpenCV Sphinx doc>

Arguments:

* Source
* Destination
* Full affine

Video capture sources:

* VideoFile and backend
* VideoDevice and backend
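The withWindow documentation above describes the usual bracket pattern: acquire a resource, run the action, and release the resource even when an exception is raised. A minimal sketch of that structure using Control.Exception.bracket — the window is simulated with an IORef here, so no OpenCV calls are made and the names are stand-ins for the library's:

```haskell
import Control.Exception (bracket)
import Data.IORef (IORef, newIORef, readIORef, writeIORef)

-- Simulated window handle: the IORef holds True while the "window"
-- is open.
type Window = IORef Bool

makeWindow :: String -> IO Window
makeWindow _title = newIORef True

destroyWindow :: Window -> IO ()
destroyWindow w = writeIORef w False

-- The withWindow pattern: the window is destroyed on exit, whether by
-- normal termination or by raising an exception.
withWindow :: String -> (Window -> IO a) -> IO a
withWindow title = bracket (makeWindow title) destroyWindow
```

The same caveat as in the docs applies: the handle must not be used outside the bracketed action, because it is already released by the time the bracket returns.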
KeyPoint: data structure for salient point detectors.

<http://docs.opencv.org/3.0-last-rst/modules/core/doc/basic_structures.html#keypoint OpenCV Sphinx doc>

RotatedRect fields:

* Rectangle mass center.
* Width and height of the rectangle.
* The rotation angle (in degrees). When the angle is 0, 90, 180, 270
  etc., the rectangle becomes an up-right rectangle.
* The minimal up-right rectangle containing the rotated rectangle.

DMatch: class for matching keypoint descriptors: query descriptor index,
train descriptor index, train image index, and distance between
descriptors.

<http://docs.opencv.org/3.0-last-rst/modules/core/doc/basic_structures.html#dmatch OpenCV Sphinx docs>

KeyPoint fields:

* Coordinates of the keypoint.
* Diameter of the meaningful keypoint neighborhood.
* Computed orientation of the keypoint (-1 if not applicable); it's in
  [0, 360) degrees and measured relative to the image coordinate system,
  i.e. clockwise.
* The response by which the strongest keypoints have been selected. Can
  be used for further sorting or subsampling.
* Octave (pyramid layer) from which the keypoint has been extracted.
* Object class (if the keypoints need to be clustered by an object they
  belong to).

DMatch fields:

* Query descriptor index.
* Train descriptor index.
* Train image index.

FlannBasedMatcher: Flann-based descriptor matcher. This matcher trains
flann::Index_ on a train descriptor collection and calls its nearest
search methods to find the best matches.
So, this matcher may be faster when matching a large train collection
than the brute force matcher. FlannBasedMatcher does not support masking
permissible matches of descriptor sets because flann::Index does not
support this.

Example:

> fbMatcherImg
>   :: forall (width :: Nat) (width2 :: Nat) (height :: Nat) (channels :: Nat) (depth :: *)
>    . ( Mat (ShapeT [height, width]) ('S channels) ('S depth) ~ Frog
>      , width2 ~ (*) width 2
>      )
>   => IO (Mat (ShapeT [height, width2]) ('S channels) ('S depth))
> fbMatcherImg = do
>     let (kpts1, descs1) = exceptError $ orbDetectAndCompute orb frog        Nothing
>         (kpts2, descs2) = exceptError $ orbDetectAndCompute orb rotatedFrog Nothing
>     fbmatcher <- newFlannBasedMatcher (def { indexParams = FlannLshIndexParams 20 10 2 })
>     matches <- match fbmatcher
>                      descs1 -- Query descriptors
>                      descs2 -- Train descriptors
>                      Nothing
>     exceptErrorIO $ pureExcept $
>       withMatM (Proxy :: Proxy [height, width2])
>                (Proxy :: Proxy channels)
>                (Proxy :: Proxy depth)
>                white $ \imgM -> do
>         matCopyToM imgM (V2 0     0) frog        Nothing
>         matCopyToM imgM (V2 width 0) rotatedFrog Nothing
>         -- Draw the matches as lines from the query image to the train image.
>         forM_ matches $ \dmatch -> do
>           let matchRec = dmatchAsRec dmatch
>               queryPt = kpts1 V.! fromIntegral (dmatchQueryIdx matchRec)
>               trainPt = kpts2 V.! fromIntegral (dmatchTrainIdx matchRec)
>               queryPtRec = keyPointAsRec queryPt
>               trainPtRec = keyPointAsRec trainPt
>           -- We translate the train point one width to the right in order
>           -- to match the position of rotatedFrog in imgM.
>           line imgM
>                (round <$> kptPoint queryPtRec :: V2 Int32)
>                ((round <$> kptPoint trainPtRec :: V2 Int32) ^+^ V2 width 0)
>                blue 1 LineType_AA 0
>   where
>     orb = mkOrb defaultOrbParams {orb_nfeatures = 50}
>     width = fromInteger $ natVal (Proxy :: Proxy width)
>     rotatedFrog = exceptError $
>       warpAffine frog rotMat InterArea False False (BorderConstant black)
>     rotMat = getRotationMatrix2D (V2 250 195 :: V2 CFloat) 45 0.8

doc/generated/examples/fbMatcherImg.png

<http://docs.opencv.org/3.0-last-rst/modules/features2d/doc/common_interfaces_of_descriptor_matchers.html#flannbasedmatcher OpenCV Sphinx doc>

BFMatcher: brute-force descriptor matcher. For each descriptor in the
first set, this matcher finds the closest descriptor in the second set by
trying each one. This descriptor matcher supports masking permissible
matches of descriptor sets.

Example:

> bfMatcherImg
>   :: forall (width :: Nat) (width2 :: Nat) (height :: Nat) (channels :: Nat) (depth :: *)
>    . ( Mat (ShapeT [height, width]) ('S channels) ('S depth) ~ Frog
>      , width2 ~ (*) width 2
>      )
>   => IO (Mat (ShapeT [height, width2]) ('S channels) ('S depth))
> bfMatcherImg = do
>     let (kpts1, descs1) = exceptError $ orbDetectAndCompute orb frog        Nothing
>         (kpts2, descs2) = exceptError $ orbDetectAndCompute orb rotatedFrog Nothing
>     bfmatcher <- newBFMatcher Norm_Hamming True
>     matches <- match bfmatcher
>                      descs1 -- Query descriptors
>                      descs2 -- Train descriptors
>                      Nothing
>     exceptErrorIO $ pureExcept $
>       withMatM (Proxy :: Proxy [height, width2])
>                (Proxy :: Proxy channels)
>                (Proxy :: Proxy depth)
>                white $ \imgM -> do
>         matCopyToM imgM (V2 0     0) frog        Nothing
>         matCopyToM imgM (V2 width 0) rotatedFrog Nothing
>         -- Draw the matches as lines from the query image to the train image.
>         forM_ matches $ \dmatch -> do
>           let matchRec = dmatchAsRec dmatch
>               queryPt = kpts1 V.! fromIntegral (dmatchQueryIdx matchRec)
>               trainPt = kpts2 V.!
>                         fromIntegral (dmatchTrainIdx matchRec)
>               queryPtRec = keyPointAsRec queryPt
>               trainPtRec = keyPointAsRec trainPt
>           -- We translate the train point one width to the right in order
>           -- to match the position of rotatedFrog in imgM.
>           line imgM
>                (round <$> kptPoint queryPtRec :: V2 Int32)
>                ((round <$> kptPoint trainPtRec :: V2 Int32) ^+^ V2 width 0)
>                blue 1 LineType_AA 0
>   where
>     orb = mkOrb defaultOrbParams {orb_nfeatures = 50}
>     width = fromInteger $ natVal (Proxy :: Proxy width)
>     rotatedFrog = exceptError $
>       warpAffine frog rotMat InterArea False False (BorderConstant black)
>     rotMat = getRotationMatrix2D (V2 250 195 :: V2 CFloat) 45 0.8

doc/generated/examples/bfMatcherImg.png

<http://docs.opencv.org/3.0-last-rst/modules/features2d/doc/common_interfaces_of_descriptor_matchers.html#bfmatcher OpenCV Sphinx doc>

Match in a pre-trained matcher.

SimpleBlobDetector parameters:

* Extracted blobs have an area between minArea (inclusive) and maxArea
  (exclusive).
* Extracted blobs have circularity
  ((4 * pi * Area) / (perimeter * perimeter)) between minCircularity
  (inclusive) and maxCircularity (exclusive).
* This filter compares the intensity of a binary image at the center of
  a blob to blobColor. If they differ, the blob is filtered out. Use
  blobColor = 0 to extract dark blobs and blobColor = 255 to extract
  light blobs.
* Extracted blobs have convexity (area / area of blob convex hull)
  between minConvexity (inclusive) and maxConvexity (exclusive).
* Extracted blobs have this ratio between minInertiaRatio (inclusive)
  and maxInertiaRatio (exclusive).

ORB parameters:

* The maximum number of features to retain.
* Pyramid decimation ratio, greater than 1. A value of 2 means the
  classical pyramid, where each next level has 4x less pixels than the
  previous, but such a big scale factor will degrade feature matching
  scores dramatically. On the other hand, a scale factor too close to 1
  will mean that to cover a certain scale range you will need more
  pyramid levels, and so the speed will suffer.
* The number of pyramid levels.
The smallest pyramid level will have linear size equal to input_image_linear_size / (scaleFactor ** nlevels).

- edgeThreshold: The size of the border where features are not detected. It should roughly match the patchSize parameter.
- firstLevel: Should be 0 in the current implementation.
- WTA_K: The number of points that produce each element of the oriented BRIEF descriptor. The default value 2 means BRIEF where we take a random point pair and compare their brightnesses, so we get a 0/1 response. Other possible values are 3 and 4. For example, 3 means that we take 3 random points (those point coordinates are random, but they are generated from a pre-defined seed, so each element of the BRIEF descriptor is computed deterministically from the pixel rectangle), find the point of maximum brightness, and output the index of the winner (0, 1 or 2). Such output occupies 2 bits, and therefore needs a special variant of Hamming distance, denoted as Norm_Hamming2 (2 bits per bin). With a value of 4, we take 4 random points to compute each bin (which also occupies 2 bits with possible values 0, 1, 2 or 3).
- scoreType: The default (the Harris score) means that the Harris algorithm is used to rank features (the score is written to KeyPoint::score and is used to retain the best nfeatures features); the FAST score is an alternative value that produces slightly less stable keypoints, but is a little faster to compute.
- patchSize: Size of the patch used by the oriented BRIEF descriptor. Of course, on smaller pyramid layers the perceived image area covered by a feature will be larger.

orbDetectAndCompute: Detect keypoints and compute descriptors.

Example:

```haskell
orbDetectAndComputeImg
    :: forall (width :: Nat) (height :: Nat) (channels :: Nat) (depth :: *)
     . (Mat (ShapeT [height, width]) ('S channels) ('S depth) ~ Frog)
    => Mat (ShapeT [height, width]) ('S channels) ('S depth)
orbDetectAndComputeImg = exceptError $ do
    (kpts, _descs) <- orbDetectAndCompute orb frog Nothing
    withMatM (Proxy :: Proxy [height, width])
             (Proxy :: Proxy channels)
             (Proxy :: Proxy depth)
             white $ \imgM -> do
      void $ matCopyToM imgM (V2 0 0) frog Nothing
      forM_ kpts $ \kpt -> do
        let kptRec = keyPointAsRec kpt
        circle imgM (round <$> kptPoint kptRec :: V2 Int32) 5 blue 1 LineType_AA 0
  where
    orb = mkOrb defaultOrbParams
```

Image: doc/generated/examples/orbDetectAndComputeImg.png

Parameters: Image. Mask.

Norms: the L1 and L2 norms are preferable choices for SIFT and SURF descriptors; Norm_Hamming should be used with ORB, BRISK and BRIEF; Norm_Hamming2 should be used with ORB when WTA_K is 3 or 4.

crossCheck: If False, the matcher finds the k nearest neighbours for each query descriptor (the default behaviour). If crossCheck == True, then the knnMatch() method with k = 1 will only return pairs (i, j) such that for the i-th query descriptor the j-th descriptor in the matcher's collection is the nearest, and vice versa; i.e. the matcher will only return consistent pairs. This technique usually produces the best results with a minimal number of outliers when there are enough matches. It is an alternative to the ratio test used by D. Lowe in the SIFT paper.

newCascadeClassifier: Create a new cascade classifier. Returns Nothing if the classifier is empty after initialization. This usually means that the file could not be loaded (e.g. it doesn't exist, is corrupt, etc.).

Example:

```haskell
cascadeClassifierArnold
    :: forall (width :: Nat) (height :: Nat) (channels :: Nat) (depth :: *)
     . (Mat (ShapeT [height, width]) ('S channels) ('S depth) ~ Arnold_small)
    => IO (Mat (ShapeT [height, width]) ('S channels) ('S depth))
cascadeClassifierArnold = do
    -- Create two classifiers from data files.
    Just ccFrontal <- newCascadeClassifier "data/haarcascade_frontalface_default.xml"
    Just ccEyes    <- newCascadeClassifier "data/haarcascade_eye.xml"
    -- Detect some features.
    let eyes  = ccDetectMultiscale ccEyes    arnoldGray
        faces = ccDetectMultiscale ccFrontal arnoldGray
    -- Draw the result.
    pure $ exceptError $
      withMatM (Proxy :: Proxy [height, width])
               (Proxy :: Proxy channels)
               (Proxy :: Proxy depth)
               white $ \imgM -> do
        void $ matCopyToM imgM (V2 0 0) arnold_small Nothing
        forM_ eyes  $ \eyeRect  -> lift $ rectangle imgM eyeRect  blue  2 LineType_8 0
        forM_ faces $ \faceRect -> lift $ rectangle imgM faceRect green 2 LineType_8 0
  where
    arnoldGray = exceptError $ cvtColor bgr gray arnold_small
    ccDetectMultiscale cc =
        cascadeClassifierDetectMultiScale cc Nothing Nothing minSize maxSize
    minSize = Nothing :: Maybe (V2 Int32)
    maxSize = Nothing :: Maybe (V2 Int32)
```

Image: doc/generated/examples/cascadeClassifierArnold.png

There is also a special version which returns the bounding rectangles together with rejectLevels and levelWeights.

Parameters (for both detectMultiScale variants):

- Scale factor, default is 1.1.
- Min neighbours, default 3.
- Minimum size. Default: no minimum.
- Maximum size. Default: no maximum.
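The crossCheck discussion above mentions Lowe's ratio test as the usual alternative for filtering matches. Below is a minimal, self-contained sketch of that test in plain Haskell, independent of the opencv package; all names here (hamming, ratioTest) are hypothetical, and binary descriptors are modelled as [Word8] compared with the Hamming distance that Norm_Hamming uses:

```haskell
import Data.Bits (popCount, xor)
import Data.List (sortOn)
import Data.Word (Word8)

-- Hamming distance between two equal-length binary descriptors.
hamming :: [Word8] -> [Word8] -> Int
hamming a b = sum (zipWith (\x y -> popCount (x `xor` y)) a b)

-- Keep a (query, train) index pair only when the best train match is
-- clearly better than the second best (ratio threshold 0.75).
ratioTest :: [[Word8]] -> [[Word8]] -> [(Int, Int)]
ratioTest queries trains =
    [ (qi, ti)
    | (qi, q) <- zip [0 ..] queries
    , let ranked = sortOn snd [ (ti', hamming q t) | (ti', t) <- zip [0 ..] trains ]
    , ((ti, d1) : (_, d2) : _) <- [ranked]  -- needs at least two candidates
    , fromIntegral d1 < 0.75 * (fromIntegral d2 :: Double)
    ]
```

For example, `ratioTest [[0x00]] [[0x00], [0xFF]]` keeps the single unambiguous pair and yields `[(0, 0)]`.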
applyColorMap: Applies a GNU Octave/MATLAB equivalent colormap on a given image.

Human perception isn't built for observing fine changes in grayscale images. Human eyes are more sensitive to observing changes between colors, so you often need to recolor your grayscale images to get a clue about them. OpenCV now comes with various colormaps to enhance the visualization in your computer vision application.

Example:

```haskell
grayscaleImg
    :: forall (height :: Nat) (width :: Nat) depth
     . (height ~ 30, width ~ 256, depth ~ Word8)
    => Mat (ShapeT [height, width]) ('S 1) ('S depth)
grayscaleImg = exceptError $
    matFromFunc (Proxy :: Proxy [height, width])
                (Proxy :: Proxy 1)
                (Proxy :: Proxy depth)
                grayscale
  where
    grayscale :: [Int] -> Int -> Word8
    grayscale [_y, x] 0 = fromIntegral x
    grayscale _pos _channel = error "impossible"

type ColorMapImg = Mat (ShapeT [30, 256]) ('S 3) ('S Word8)

mkColorMapImg :: ColorMap -> ColorMapImg
mkColorMapImg cmap = exceptError $ applyColorMap cmap grayscaleImg

colorMapAutumImg, colorMapBoneImg, colorMapJetImg, colorMapWinterImg,
  colorMapRainbowImg, colorMapOceanImg, colorMapSummerImg, colorMapSpringImg,
  colorMapCoolImg, colorMapHsvImg, colorMapPinkImg, colorMapHotImg,
  colorMapParulaImg :: ColorMapImg
colorMapAutumImg   = mkColorMapImg ColorMapAutumn
colorMapBoneImg    = mkColorMapImg ColorMapBone
colorMapJetImg     = mkColorMapImg ColorMapJet
colorMapWinterImg  = mkColorMapImg ColorMapWinter
colorMapRainbowImg = mkColorMapImg ColorMapRainbow
colorMapOceanImg   = mkColorMapImg ColorMapOcean
colorMapSummerImg  = mkColorMapImg ColorMapSummer
colorMapSpringImg  = mkColorMapImg ColorMapSpring
colorMapCoolImg    = mkColorMapImg ColorMapCool
colorMapHsvImg     = mkColorMapImg ColorMapHsv
colorMapPinkImg    = mkColorMapImg ColorMapPink
colorMapHotImg     = mkColorMapImg ColorMapHot
colorMapParulaImg  = mkColorMapImg ColorMapParula
```

Images: doc/generated/examples/grayscaleImg.png and doc/generated/examples/colorMap{Autum,Bone,Jet,Winter,Rainbow,Ocean,Summer,Spring,Cool,Hsv,Pink,Hot,Parula}Img.png

OpenCV Sphinx doc: http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/colormaps.html#applycolormap

Contour drawing modes: an outline mode (thickness of the lines the contours are drawn with) and a fill mode (draw the contour, filling in the area).

FontHersheySimplex: Normal size sans-serif font. Does not have a slanted variant. (image: doc/generated/FontHersheySimplex.png)

FontHersheyPlain: Small size sans-serif font. (images: doc/generated/FontHersheyPlain.png, doc/generated/FontHersheyPlain_slanted.png)

FontHersheyDuplex: Normal size sans-serif font (more complex than FontHersheySimplex). Does not have a slanted variant. (image: doc/generated/FontHersheyDuplex.png)

FontHersheyComplex: Normal size serif font. (images: doc/generated/FontHersheyComplex.png, doc/generated/FontHersheyComplex_slanted.png)

FontHersheyTriplex: Normal size serif font (more complex than FontHersheyComplex). (images: doc/generated/FontHersheyTriplex.png, doc/generated/FontHersheyTriplex_slanted.png)

FontHersheyComplexSmall: Smaller version of FontHersheyComplex. (images: doc/generated/FontHersheyComplexSmall.png, doc/generated/FontHersheyComplexSmall_slanted.png)

FontHersheyScriptSimplex: Hand-writing style font. Does not have a slanted variant. (image: doc/generated/FontHersheyScriptSimplex.png)

FontHersheyScriptComplex: More complex variant of FontHersheyScriptSimplex. Does not have a slanted variant. (image: doc/generated/FontHersheyScriptComplex.png)

LineType_8: 8-connected line. (image: doc/generated/LineType_8.png)

LineType_4: 4-connected line. (image: doc/generated/LineType_4.png)

LineType_AA: Antialiased line.
(image: doc/generated/LineType_AA.png)

arrowedLine: Draws an arrow segment pointing from the first point to the second one.

Example:

```haskell
arrowedLineImg :: Mat (ShapeT [200, 300]) ('S 4) ('S Word8)
arrowedLineImg = exceptError $
    withMatM (Proxy :: Proxy [200, 300])
             (Proxy :: Proxy 4)
             (Proxy :: Proxy Word8)
             transparent $ \imgM -> do
      arrowedLine imgM (V2  10 130 :: V2 Int32) (V2 190  40 :: V2 Int32) blue 5 LineType_AA 0 0.15
      arrowedLine imgM (V2 210  50 :: V2 Int32) (V2 250 180 :: V2 Int32) red  8 LineType_AA 0 0.4
```

Image: doc/generated/examples/arrowedLineImg.png
OpenCV Doxygen doc: http://docs.opencv.org/3.0.0/d6/d6e/group__imgproc__draw.html#ga0a165a3ca093fd488ac709fdf10c05b2
OpenCV Sphinx doc: http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/drawing_functions.html#arrowedline

circle: Draws a circle.

Example:

```haskell
circleImg :: Mat (ShapeT [200, 400]) ('S 4) ('S Word8)
circleImg = exceptError $
    withMatM (Proxy :: Proxy [200, 400])
             (Proxy :: Proxy 4)
             (Proxy :: Proxy Word8)
             transparent $ \imgM -> do
      lift $ circle imgM (V2 100 100 :: V2 Int32) 90 blue 5    LineType_AA 0
      lift $ circle imgM (V2 300 100 :: V2 Int32) 45 red  (-1) LineType_AA 0
```

Image: doc/generated/examples/circleImg.png
OpenCV Sphinx doc: http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/drawing_functions.html#circle

ellipse: Draws a simple or thick elliptic arc or fills an ellipse sector.

Example:

```haskell
ellipseImg :: Mat (ShapeT [200, 400]) ('S 4) ('S Word8)
ellipseImg = exceptError $
    withMatM (Proxy :: Proxy [200, 400])
             (Proxy :: Proxy 4)
             (Proxy :: Proxy Word8)
             transparent $ \imgM -> do
      lift $ ellipse imgM (V2 100 100 :: V2 Int32) (V2 90 60 :: V2 Int32)  30  0 360 blue 5    LineType_AA 0
      lift $ ellipse imgM (V2 300 100 :: V2 Int32) (V2 80 40 :: V2 Int32) 160 40 290 red  (-1) LineType_AA 0
```

Image: doc/generated/examples/ellipseImg.png
OpenCV Sphinx doc: http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/drawing_functions.html#ellipse

fillConvexPoly: Fills a convex polygon. The function fillConvexPoly draws a filled convex polygon.
This function is much faster than fillPoly. It can fill not only convex polygons but any monotonic polygon without self-intersections, that is, a polygon whose contour intersects every horizontal line (scan line) twice at the most (though its top-most and/or bottom edge could be horizontal).

Example:

```haskell
fillConvexPolyImg
    :: forall (h :: Nat) (w :: Nat)
     . (h ~ 300, w ~ 300)
    => Mat (ShapeT [h, w]) ('S 4) ('S Word8)
fillConvexPolyImg = exceptError $
    withMatM (Proxy :: Proxy [h, w])
             (Proxy :: Proxy 4)
             (Proxy :: Proxy Word8)
             transparent $ \imgM -> do
      lift $ fillConvexPoly imgM pentagon blue LineType_AA 0
  where
    pentagon :: V.Vector (V2 Int32)
    pentagon = V.fromList
        [ V2 150   0
        , V2   7 104
        , V2  62 271
        , V2 238 271
        , V2 293 104
        ]
```

Image: doc/generated/examples/fillConvexPolyImg.png
OpenCV Sphinx doc: http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/drawing_functions.html#fillconvexpoly

fillPoly: Fills the area bounded by one or more polygons.

Example:

```haskell
rookPts :: Int32 -> Int32 -> V.Vector (V.Vector (V2 Int32))
rookPts w h = V.singleton $ V.fromList
    [ V2 (    w `div`  4) ( 7*h `div`  8)
    , V2 (  3*w `div`  4) ( 7*h `div`  8)
    , V2 (  3*w `div`  4) (13*h `div` 16)
    , V2 ( 11*w `div` 16) (13*h `div` 16)
    , V2 ( 19*w `div` 32) ( 3*h `div`  8)
    , V2 (  3*w `div`  4) ( 3*h `div`  8)
    , V2 (  3*w `div`  4) (   h `div`  8)
    , V2 ( 26*w `div` 40) (   h `div`  8)
    , V2 ( 26*w `div` 40) (   h `div`  4)
    , V2 ( 22*w `div` 40) (   h `div`  4)
    , V2 ( 22*w `div` 40) (   h `div`  8)
    , V2 ( 18*w `div` 40) (   h `div`  8)
    , V2 ( 18*w `div` 40) (   h `div`  4)
    , V2 ( 14*w `div` 40) (   h `div`  4)
    , V2 ( 14*w `div` 40) (   h `div`  8)
    , V2 (    w `div`  4) (   h `div`  8)
    , V2 (    w `div`  4) ( 3*h `div`  8)
    , V2 ( 13*w `div` 32) ( 3*h `div`  8)
    , V2 (  5*w `div` 16) (13*h `div` 16)
    , V2 (    w `div`  4) (13*h `div` 16)
    ]

fillPolyImg
    :: forall (h :: Nat) (w :: Nat)
     . (h ~ 300, w ~ 300)
    => Mat (ShapeT [h, w]) ('S 4) ('S Word8)
fillPolyImg = exceptError $
    withMatM (Proxy :: Proxy [h, w])
             (Proxy :: Proxy 4)
             (Proxy :: Proxy Word8)
             transparent $ \imgM -> do
      lift $ fillPoly imgM (rookPts w h) blue LineType_AA 0
  where
    h = fromInteger $ natVal (Proxy :: Proxy h)
    w = fromInteger $ natVal (Proxy :: Proxy w)
```

Image: doc/generated/examples/fillPolyImg.png
OpenCV Sphinx doc: http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/drawing_functions.html#fillpoly

polylines: Draws several polygonal curves.

Example:

```haskell
polylinesImg
    :: forall (h :: Nat) (w :: Nat)
     . (h ~ 300, w ~ 300)
    => Mat (ShapeT [h, w]) ('S 4) ('S Word8)
polylinesImg = exceptError $
    withMatM (Proxy :: Proxy [h, w])
             (Proxy :: Proxy 4)
             (Proxy :: Proxy Word8)
             transparent $ \imgM -> do
      lift $ polylines imgM (rookPts w h) True blue 2 LineType_AA 0
  where
    h = fromInteger $ natVal (Proxy :: Proxy h)
    w = fromInteger $ natVal (Proxy :: Proxy w)
```

Image: doc/generated/examples/polylinesImg.png
OpenCV Sphinx doc: http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/drawing_functions.html#polylines

line: Draws a line segment connecting two points.

Example:

```haskell
lineImg :: Mat (ShapeT [200, 300]) ('S 4) ('S Word8)
lineImg = exceptError $
    withMatM (Proxy :: Proxy [200, 300])
             (Proxy :: Proxy 4)
             (Proxy :: Proxy Word8)
             transparent $ \imgM -> do
      lift $ line imgM (V2  10 130 :: V2 Int32) (V2 190  40 :: V2 Int32) blue 5 LineType_AA 0
      lift $ line imgM (V2 210  50 :: V2 Int32) (V2 250 180 :: V2 Int32) red  8 LineType_AA 0
```

Image: doc/generated/examples/lineImg.png
OpenCV Sphinx doc: http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/drawing_functions.html#line

getTextSize: Calculates the size of a box that contains the specified text.

OpenCV Sphinx doc: http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/drawing_functions.html#gettextsize

putText: Draws a text string. The function putText renders the specified text string in the image.
Symbols that cannot be rendered using the specified font are replaced by question marks.

Example:

```haskell
putTextImg :: Mat ('S ['D, 'S 400]) ('S 4) ('S Word8)
putTextImg = exceptError $
    withMatM (height ::: (Proxy :: Proxy 400) ::: Z)
             (Proxy :: Proxy 4)
             (Proxy :: Proxy Word8)
             transparent $ \imgM -> do
      forM_ (zip [0 ..] [minBound .. maxBound]) $ \(n, fontFace) ->
        lift $ putText imgM
                       (T.pack $ show fontFace)
                       (V2 10 (35 + n * 30) :: V2 Int32)
                       (Font fontFace NotSlanted 1.0)
                       black 1 LineType_AA False
  where
    height :: Int32
    height = 50 + fromIntegral (30 * fromEnum (maxBound :: FontFace))
```

Image: doc/generated/examples/putTextImg.png
OpenCV Sphinx doc: http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/drawing_functions.html#puttext

rectangle: Draws a simple, thick, or filled up-right rectangle.

Example:

```haskell
rectangleImg :: Mat (ShapeT [200, 400]) ('S 4) ('S Word8)
rectangleImg = exceptError $
    withMatM (Proxy :: Proxy [200, 400])
             (Proxy :: Proxy 4)
             (Proxy :: Proxy Word8)
             transparent $ \imgM -> do
      lift $ rectangle imgM (toRect $ HRect (V2  10 10) (V2 180 180)) blue 5    LineType_8 0
      lift $ rectangle imgM (toRect $ HRect (V2 260 30) (V2  80 140)) red  (-1) LineType_8 0
```

Image: doc/generated/examples/rectangleImg.png
OpenCV Sphinx doc: http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/drawing_functions.html#rectangle

drawContours: Draw contours onto a black image.

Example:

```haskell
flowerContours :: Mat ('S ['S 512, 'S 768]) ('S 3) ('S Word8)
flowerContours = exceptError $
    withMatM (Proxy :: Proxy [512, 768])
             (Proxy :: Proxy 3)
             (Proxy :: Proxy Word8)
             black $ \imgM -> do
      edges <- thaw $ exceptError $
               cvtColor bgr gray flower_768x512 >>= canny 30 20 Nothing CannyNormL1
      contours <- findContours ContourRetrievalList ContourApproximationSimple edges
      lift $ drawContours (V.map contourPoints contours) red (OutlineContour LineType_AA 1) imgM
```

Image: doc/generated/examples/flowerContours.png

marker: Draws a marker on a predefined position in an image. The marker will be drawn as a 20-pixel cross.

Example:

```haskell
markerImg :: Mat (ShapeT [100, 100]) ('S 4) ('S Word8)
markerImg = exceptError $
    withMatM (Proxy :: Proxy [100, 100])
             (Proxy :: Proxy 4)
             (Proxy :: Proxy Word8)
             transparent $ \imgM -> do
      lift $ marker imgM (50 :: V2 Int32) blue
```

Image: doc/generated/examples/markerImg.png

arrowedLine parameters:

- Image.
- The point the arrow starts from.
- The point the arrow points to.
- Line color.
- Line thickness.
- Number of fractional bits in the point coordinates.
- The length of the arrow tip in relation to the arrow length.

circle parameters:

- Image where the circle is drawn.
- Center of the circle.
- Radius of the circle.
- Circle color.
- Thickness of the circle outline, if positive. Negative thickness means that a filled circle is to be drawn.
- Type of the circle boundary.
- Number of fractional bits in the coordinates of the center and in the radius value.

ellipse parameters:

- Image.
- Center of the ellipse.
- Half of the size of the ellipse main axes.
- Ellipse rotation angle in degrees.
- Starting angle of the elliptic arc in degrees.
- Ending angle of the elliptic arc in degrees.
- Ellipse color.
- Thickness of the ellipse arc outline, if positive. Otherwise, this indicates that a filled ellipse sector is to be drawn.
- Type of the ellipse boundary.
- Number of fractional bits in the coordinates of the center and values of axes.

fillConvexPoly parameters:

- Image.
- Polygon vertices.
- Polygon color.
- Number of fractional bits in the vertex coordinates.

fillPoly parameters:

- Image.
- Polygons.
- Polygon color.
- Number of fractional bits in the vertex coordinates.

polylines parameters:

- Image.
- Vertices.
- Flag indicating whether the drawn polylines are closed or not. If they are closed, the function draws a line from the last vertex of each curve to its first vertex.
- Thickness of the polyline edges.
- Number of fractional bits in the vertex coordinates.

line parameters:

- Image.
- First point of the line segment.
- Second point of the line segment.
- Line color.
- Line thickness.
- Number of fractional bits in the point coordinates.

getTextSize parameters:

- Thickness of the lines used to render the text.
- Returns (size, baseLine): the size of a box that contains the specified text, and the y-coordinate of the baseline relative to the bottom-most text point.

putText parameters:

- Image.
- Text string to be drawn.
- Bottom-left corner of the text string in the image.
- Text color.
- Thickness of the lines used to draw the text.
- When True, the image data origin is at the bottom-left corner. Otherwise, it is at the top-left corner.

rectangle parameters:

- Image.
- Rectangle color or brightness (grayscale image).
- Line thickness.
- Number of fractional bits in the point coordinates.

drawContours parameters:

- Color of the contours.
- Image.

marker parameters:

- The image to draw the marker on.
- The point where the crosshair is positioned.
- Line color.

Harris detector free parameter k.

A flag indicating whether to use the more accurate L2 norm or the default L1 norm.

canny: Finds edges in an image using the Canny86 algorithm (http://docs.opencv.org/2.4/modules/imgproc/doc/feature_detection.html#canny86).

Example:

```haskell
cannyImg
    :: forall shape channels depth
     . (Mat shape channels depth ~ Lambda)
    => Mat shape ('S 1) depth
cannyImg = exceptError $ canny 30 200 Nothing CannyNormL1 lambda
```

Image: doc/generated/examples/cannyImg.png

goodFeaturesToTrack: Determines strong corners on an image.

The function finds the most prominent corners in the image or in the specified image region:

- It calculates the corner quality measure at every source image pixel using cornerMinEigenVal or cornerHarris.
- It performs non-maximum suppression (the local maximums in a 3 x 3 neighborhood are retained).
- The corners with a minimal eigenvalue less than qualityLevel * max(qualityMeasureMap(x, y)) are rejected.
- The remaining corners are sorted by the quality measure in descending order.
- It throws away each corner for which there is a stronger corner at a distance less than maxDistance.

Example:

```haskell
goodFeaturesToTrackTraces
    :: forall (width :: Nat) (height :: Nat) (channels :: Nat) (depth :: *)
     . (Mat (ShapeT [height, width]) ('S channels) ('S depth) ~ Frog)
    => Mat (ShapeT [height, width]) ('S channels) ('S depth)
goodFeaturesToTrackTraces = exceptError $ do
    imgG <- cvtColor bgr gray frog
    let features = goodFeaturesToTrack imgG 20 0.01 0.5 Nothing Nothing CornerMinEigenVal
    withMatM (Proxy :: Proxy [height, width])
             (Proxy :: Proxy channels)
             (Proxy :: Proxy depth)
             white $ \imgM -> do
      void $ matCopyToM imgM (V2 0 0) frog Nothing
      forM_ features $ \f -> do
        circle imgM (round <$> f :: V2 Int32) 2 blue 5 LineType_AA 0
```

Image: doc/generated/examples/goodFeaturesToTrackTraces.png

houghCircles: Finds circles in a grayscale image using a modification of the Hough transformation.

Example:

```haskell
houghCircleTraces
    :: forall (width :: Nat) (height :: Nat) (channels :: Nat) (depth :: *)
     . (Mat (ShapeT [height, width]) ('S channels) ('S depth) ~ Circles_1000x625)
    => Mat (ShapeT [height, width]) ('S channels) ('S depth)
houghCircleTraces = exceptError $ do
    imgG <- cvtColor bgr gray circles_1000x625
    let circles = houghCircles 1 10 Nothing Nothing Nothing Nothing imgG
    withMatM (Proxy :: Proxy [height, width])
             (Proxy :: Proxy channels)
             (Proxy :: Proxy depth)
             white $ \imgM -> do
      void $ matCopyToM imgM (V2 0 0) circles_1000x625 Nothing
      forM_ circles $ \c -> do
        circle imgM (round <$> circleCenter c :: V2 Int32) (round (circleRadius c)) blue 1 LineType_AA 0
```

Image: doc/generated/examples/houghCircleTraces.png

houghLinesP:

Example:

```haskell
houghLinesPTraces
    :: forall (width :: Nat) (height :: Nat) (channels :: Nat) (depth :: *)
     . (Mat (ShapeT [height, width]) ('S channels) ('S depth) ~ Building_868x600)
    => Mat (ShapeT [height, width]) ('S channels) ('S depth)
houghLinesPTraces = exceptError $ do
    edgeImg <- canny 50 200 Nothing CannyNormL1 building_868x600
    edgeImgBgr <- cvtColor gray bgr edgeImg
    withMatM (Proxy :: Proxy [height, width])
             (Proxy :: Proxy channels)
             (Proxy :: Proxy depth)
             white $ \imgM -> do
      edgeImgM <- thaw edgeImg
      lineSegments <- houghLinesP 1 (pi / 180) 80 (Just 30) (Just 10) edgeImgM
      void $ matCopyToM imgM (V2 0 0) edgeImgBgr Nothing
      forM_ lineSegments $ \lineSegment -> do
        line imgM
             (lineSegmentStart lineSegment)
             (lineSegmentStop  lineSegment)
             red 2 LineType_8 0
```

Image: doc/generated/examples/houghLinesPTraces.png

canny parameters:

- First threshold for the hysteresis procedure.
- Second threshold for the hysteresis procedure.
- Aperture size for the Sobel() operator. If not specified, defaults to 3. Must be 3, 5 or 7.
- A flag indicating whether to use the more accurate L2 norm or the default L1 norm.
- 8-bit input image.

goodFeaturesToTrack parameters:

- Input 8-bit or floating-point 32-bit, single-channel image.
- Maximum number of corners to return. If more corners are found, the strongest of them are returned.
- Parameter characterizing the minimal accepted quality of image corners. The parameter value is multiplied by the best corner quality measure, which is the minimal eigenvalue (see cornerMinEigenVal) or the Harris function response (see cornerHarris). The corners with a quality measure less than the product are rejected. For example, if the best corner has the quality measure 1500 and qualityLevel = 0.01, then all corners with a quality measure less than 15 are rejected.
- Minimum possible Euclidean distance between the returned corners.
- Optional region of interest.
If the mask image is not empty (it needs to have the type CV_8UC1 and the same size as the input image), it specifies the region in which the corners are detected.

- Size of an average block for computing a derivative covariation matrix over each pixel neighborhood. See cornerEigenValsAndVecs.
- Parameter indicating whether to use a Harris detector (see cornerHarris) or cornerMinEigenVal.

houghCircles parameters:

- dp: Inverse ratio of the accumulator resolution to the image resolution. For example, if dp = 1, the accumulator has the same resolution as the input image. If dp = 2, the accumulator has half the width and height.
- Minimum distance between the centers of the detected circles. If the parameter is too small, multiple neighbor circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed.
- The higher threshold of the two passed to the Canny edge detector (the lower one is twice smaller). Default is 100.
- The accumulator threshold for the circle centers at the detection stage. The smaller it is, the more false circles may be detected. Circles corresponding to the larger accumulator values will be returned first. Default is 100.
- Minimum circle radius.
- Maximum circle radius.

houghLinesP parameters:

- Distance resolution of the accumulator in pixels.
- Angle resolution of the accumulator in radians.
- Accumulator threshold parameter. Only those lines are returned that get enough votes (> threshold).
- Minimum line length. Line segments shorter than that are rejected.
- Maximum allowed gap between points on the same line to link them.
- Source image. May be modified by the function.

Flood fill operation flags:

- Connectivity value. The default value of 4 means that only the four nearest neighbor pixels (those that share an edge) are considered. A connectivity value of 8 means that the eight nearest neighbor pixels (those that share a corner) will be considered.
- Value between 1 and 255 with which to fill the mask (the default value is 1).
- If set, the difference between the current pixel and the seed pixel is considered. Otherwise, the difference between neighbor pixels is considered (that is, the range is floating).
- If set, the function does not change the image (newVal is ignored) and only fills the mask with the value specified in bits 8-16 of the flags as described above. This option only makes sense in function variants that have the mask parameter.

cvtColor: Converts an image from one color space to another.

The function converts an input image from one color space to another. In case of a transformation to-from RGB color space, the order of the channels should be specified explicitly (RGB or BGR). Note that the default color format in OpenCV is often referred to as RGB but it is actually BGR (the bytes are reversed). So the first byte in a standard (24-bit) color image will be an 8-bit Blue component, the second byte will be Green, and the third byte will be Red. The fourth, fifth, and sixth bytes would then be the second pixel (Blue, then Green, then Red), and so on.

The conventional ranges for R, G, and B channel values are:

- 0 to 255 for 8-bit images
- 0 to 65535 for 16-bit images
- 0 to 1 for floating-point images

In case of linear transformations, the range does not matter. But in case of a non-linear transformation, an input RGB image should be normalized to the proper value range to get the correct results, for example, for an RGB to L*u*v* transformation. For example, if you have a 32-bit floating-point image directly converted from an 8-bit image without any scaling, then it will have the 0..255 value range instead of the 0..1 assumed by the function. So, before calling cvtColor, you need first to scale the image down:

```haskell
cvtColor (img * 1/255) 'ColorConvBGR2Luv'
```

If you use cvtColor with 8-bit images, the conversion will have some information lost. For many applications, this will not be noticeable, but it is recommended to use 32-bit images in applications that need the full range of colors or that convert an image before an operation and then convert back.

If the conversion adds the alpha channel, its value will be set to the maximum of the corresponding channel range: 255 for 8-bit, 65535 for 16-bit, 1 for floating-point images.

Example:

```haskell
cvtColorImg
    :: forall (width :: Nat) (width2 :: Nat) (height :: Nat) (channels :: Nat) (depth :: *)
     . ( Mat (ShapeT [height, width]) ('S channels) ('S depth) ~ Birds_512x341
       , width2 ~ (width + width)
       )
    => Mat (ShapeT [height, width2]) ('S channels) ('S depth)
cvtColorImg = exceptError $
    withMatM ((Proxy :: Proxy height) ::: (Proxy :: Proxy width2) ::: Z)
             (Proxy :: Proxy channels)
             (Proxy :: Proxy depth)
             white $ \imgM -> do
      birds_gray <- pureExcept $ cvtColor gray bgr =<< cvtColor bgr gray birds_512x341
      matCopyToM imgM (V2 0 0) birds_512x341 Nothing
      matCopyToM imgM (V2 w 0) birds_gray    Nothing
      lift $ arrowedLine imgM (V2 startX midY) (V2 pointX midY) red 4 LineType_8 0 0.15
  where
    h, w :: Int32
    h = fromInteger $ natVal (Proxy :: Proxy height)
    w = fromInteger $ natVal (Proxy :: Proxy width)
    startX, pointX :: Int32
    startX = round $ fromIntegral w * (0.95 :: Double)
    pointX = round $ fromIntegral w * (1.05 :: Double)
    midY = h `div` 2
```

Image: doc/generated/examples/cvtColorImg.png
OpenCV Sphinx doc: http://goo.gl/3rfrhu

floodFill: The function floodFill fills a connected component starting from the seed point with the specified color. The connectivity is determined by the color/brightness closeness of the neighbor pixels. See the OpenCV documentation for details on the algorithm.

Example:
( Mat (ShapeT [height, width]) ('S channels) ('S depth) ~ Sailboat_768x512 , width2 ~ (width + width) ) => Mat (ShapeT [height, width2]) ('S channels) ('S depth) floodFillImg = exceptError $ withMatM ((Proxy :: Proxy height) ::: (Proxy :: Proxy width2) ::: Z) (Proxy :: Proxy channels) (Proxy :: Proxy depth) white $ \imgM -> do sailboatEvening_768x512 <- thaw sailboat_768x512 mask <- mkMatM (Proxy :: Proxy [height + 2, width + 2]) (Proxy :: Proxy 1) (Proxy :: Proxy Word8) black circle mask (V2 450 120 :: V2 Int32) 45 white (-1) LineType_AA 0 rect <- floodFill sailboatEvening_768x512 (Just mask) seedPoint eveningRed (Just tolerance) (Just tolerance) defaultFloodFillOperationFlags rectangle sailboatEvening_768x512 rect blue 2 LineType_8 0 frozenSailboatEvening_768x512 <- freeze sailboatEvening_768x512 matCopyToM imgM (V2 0 0) sailboat_768x512 Nothing matCopyToM imgM (V2 w 0) frozenSailboatEvening_768x512 Nothing lift $ arrowedLine imgM (V2 startX midY) (V2 pointX midY) red 4 LineType_8 0 0.15 where h, w :: Int32 h = fromInteger $ natVal (Proxy :: Proxy height) w = fromInteger $ natVal (Proxy :: Proxy width) startX, pointX :: Int32 startX = round $ fromIntegral w * (0.95 :: Double) pointX = round $ fromIntegral w * (1.05 :: Double) midY = h `div` 2 seedPoint :: V2 Int32 seedPoint = V2 100 50 eveningRed :: V4 Double eveningRed = V4 0 100 200 255 tolerance :: V4 Double tolerance = pure 7  'doc/generated/examples/floodFillImg.png floodFillImg http://goo.gl/9XIIneOpenCV Sphinx Doc`5Applies a fixed-level threshold to each array element?The function applies fixed-level thresholding to a single-channel array. The function is typically used to get a bi-level (binary) image out of a grayscale image or for removing a noise, that is, filtering out pixels with too small or too large values. 
There are several types of thresholding supported by the function.Example: }grayBirds :: Mat (ShapeT [341, 512]) ('S 1) ('S Word8) grayBirds = exceptError $ cvtColor bgr gray birds_512x341 threshBinaryBirds :: Mat (ShapeT [341, 512]) ('S 3) ('S Word8) threshBinaryBirds = exceptError $ cvtColor gray bgr $ fst $ exceptError $ threshold (ThreshVal_Abs 100) (Thresh_Binary 150) grayBirds threshBinaryInvBirds :: Mat (ShapeT [341, 512]) ('S 3) ('S Word8) threshBinaryInvBirds = exceptError $ cvtColor gray bgr $ fst $ exceptError $ threshold (ThreshVal_Abs 100) (Thresh_BinaryInv 150) grayBirds threshTruncateBirds :: Mat (ShapeT [341, 512]) ('S 3) ('S Word8) threshTruncateBirds = exceptError $ cvtColor gray bgr $ fst $ exceptError $ threshold (ThreshVal_Abs 100) Thresh_Truncate grayBirds threshToZeroBirds :: Mat (ShapeT [341, 512]) ('S 3) ('S Word8) threshToZeroBirds = exceptError $ cvtColor gray bgr $ fst $ exceptError $ threshold (ThreshVal_Abs 100) Thresh_ToZero grayBirds threshToZeroInvBirds :: Mat (ShapeT [341, 512]) ('S 3) ('S Word8) threshToZeroInvBirds = exceptError $ cvtColor gray bgr $ fst $ exceptError $ threshold (ThreshVal_Abs 100) Thresh_ToZeroInv grayBirds  ,doc/generated/examples/threshBinaryBirds.pngthreshBinaryBirds  /doc/generated/examples/threshBinaryInvBirds.pngthreshBinaryInvBirds  .doc/generated/examples/threshTruncateBirds.pngthreshTruncateBirds  ,doc/generated/examples/threshToZeroBirds.pngthreshToZeroBirds /doc/generated/examples/threshToZeroInvBirds.pngthreshToZeroInvBirds dhttp://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/miscellaneous_transformations.html#thresholdOpenCV Sphinx docaIPerforms a marker-based image segmentation using the watershed algorithm.The function implements one of the variants of watershed, non-parametric marker-based segmentation algorithm, described in [Meyer, F. 
Color Image Segmentation, ICIP92, 1992].

Before passing the image to the function, you have to roughly outline the desired regions in the image markers with positive (>0) indices. So, every region is represented as one or more connected components with the pixel values 1, 2, 3, and so on. Such markers can be retrieved from a binary mask using findContours and drawContours. The markers are "seeds" of the future image regions. All the other pixels in markers, whose relation to the outlined regions is not known and should be defined by the algorithm, should be set to 0's. In the function output, each pixel in markers is set to a value of the "seed" components, or to -1 at boundaries between the regions.

<http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/miscellaneous_transformations.html#watershed OpenCV Sphinx doc>

Runs the <http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/miscellaneous_transformations.html#grabcut GrabCut> algorithm.

Example:

grabCutBird :: Birds_512x341
grabCutBird = exceptError $ do
    mask <- withMatM (Proxy :: Proxy [341, 512])
                     (Proxy :: Proxy 1)
                     (Proxy :: Proxy Word8)
                     black $ \mask -> do
      fgTmp <- mkMatM (Proxy :: Proxy [1, 65]) (Proxy :: Proxy 1) (Proxy :: Proxy Double) black
      bgTmp <- mkMatM (Proxy :: Proxy [1, 65]) (Proxy :: Proxy 1) (Proxy :: Proxy Double) black
      grabCut birds_512x341 mask fgTmp bgTmp 5 (GrabCut_InitWithRect rect)
    mask' <- matScalarCompare mask 3 Cmp_Ge
    withMatM (Proxy :: Proxy [341, 512])
             (Proxy :: Proxy 3)
             (Proxy :: Proxy Word8)
             transparent $ \imgM -> do
      matCopyToM imgM (V2 0 0) birds_512x341 (Just mask')
  where
    rect :: Rect Int32
    rect = toRect $ HRect { hRectTopLeft = V2 264 60, hRectSize = V2 248 281 }

<<doc/generated/examples/grabCutBird.png grabCutBird>>

Returns 0 if the pixels are not in the range, 255 otherwise.

cvtColor parameters:

Convert from the given color space. Make sure the source image actually has this color space.
Convert to the given color space.
Source image.

floodFill parameters:

Input/output 1- or 3-channel, 8-bit, or floating-point image.
It is modified by the function unless the FLOODFILL_MASK_ONLY flag is set.

Operation mask that should be a single-channel 8-bit image, 2 pixels wider and 2 pixels taller than the image. Since this is both an input and output parameter, you must take responsibility for initializing it. Flood-filling cannot go across non-zero pixels in the input mask. For example, an edge detector output can be used as a mask to stop filling at edges. On output, pixels in the mask corresponding to filled pixels in the image are set to 1 or to the value specified in flags as described below. It is therefore possible to use the same mask in multiple calls to the function to make sure the filled areas do not overlap. Note: since the mask is larger than the filled image, a pixel (x, y) in the image corresponds to the pixel (x+1, y+1) in the mask.

Starting point.

New value of the repainted domain pixels.

Maximal lower brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component. Zero by default.

Maximal upper brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component. Zero by default.

watershed parameters:

Input 8-bit 3-channel image.
Input/output 32-bit single-channel image (map) of markers.

grabCut parameters:

Input 8-bit 3-channel image.

Input/output 8-bit single-channel mask. The mask is initialized by the function when mode is set to GC_INIT_WITH_RECT. Its elements may have one of the following values:

* GC_BGD defines an obvious background pixel.
* GC_FGD defines an obvious foreground (object) pixel.
* GC_PR_BGD defines a possible background pixel.
* GC_PR_FGD defines a possible foreground pixel.

Temporary array for the background model. Do not modify it while you are processing the same image.

Temporary array for the foreground model.
Do not modify it while you are processing the same image.

Number of iterations the algorithm should make before returning the result. Note that the result can be refined with further calls with mode==GC_INIT_WITH_MASK or mode==GC_EVAL.

Operation mode.

inRange parameters: Lower bound. Upper bound.

Whether to use normalisation.

<http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/object_detection.html#matchtemplate OpenCV Sphinx doc>

Squared difference:

not normed: <http://docs.opencv.org/3.0-last-rst/_images/math/f096a706cb9499736423f10d901c7fe13a1e6926.png>
normed: <http://docs.opencv.org/3.0-last-rst/_images/math/6d6a720237b3a4c1365c8e86a9cfcf0895d5e265.png>

Cross correlation:

not normed: <http://docs.opencv.org/3.0-last-rst/_images/math/93f1747a86a3c5095a0e6a187442c6e2a0ae0968.png>
normed: <http://docs.opencv.org/3.0-last-rst/_images/math/6a72ad9ae17c4dad88e33ed16308fc1cfba549b8.png>

Correlation coefficient:

not normed: <http://docs.opencv.org/3.0-last-rst/_images/math/c9b62df96d0692d90cc1d8a5912a68a44461910c.png>
where <http://docs.opencv.org/3.0-last-rst/_images/math/ffb6954b6020b02e13b73c79bd852c1627cfb79c.png>
normed: <http://docs.opencv.org/3.0-last-rst/_images/math/235e42ec68d2d773899efcf0a4a9d35a7afedb64.png>

<http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/object_detection.html#matchtemplate OpenCV Sphinx doc>

Compares a template against overlapped image regions.

The function slides through image, compares the overlapped patches of size w x h against templ using the specified method, and stores the comparison results in result.
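The plain squared-difference method can be sketched in pure Haskell over row-lists (a toy `sqDiffMap`, for illustration only; the bindings' matchTemplate works on Mats and supports all three method families):

```haskell
-- Toy model of TM_SQDIFF: R(x,y) = sum over (x',y') of
-- (T(x',y') - I(x+x', y+y'))^2, with images as lists of rows.
type Image = [[Double]]

sqDiffAt :: Image -> Image -> Int -> Int -> Double
sqDiffAt img tmpl x y =
  sum [ (t - i) ** 2
      | (y', row) <- zip [0..] tmpl
      , (x', t)   <- zip [0..] row
      , let i = img !! (y + y') !! (x + x')
      ]

-- Sliding the template over every valid position gives the result
-- map of size (W-w+1) x (H-h+1); the best match minimizes R.
sqDiffMap :: Image -> Image -> Image
sqDiffMap img tmpl =
  [ [ sqDiffAt img tmpl x y | x <- [0 .. w - tw] ]
  | y <- [0 .. h - th] ]
  where
    h  = length img;  w  = length (head img)
    th = length tmpl; tw = length (head tmpl)

main :: IO ()
main = print (sqDiffMap [[1,2,3],[4,5,6]] [[1,2],[4,5]])
-- prints [[0.0,4.0]]: an exact match at (0,0), a mismatch of 4 at (1,0)
```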
Here are the formulae for the available comparison methods (I denotes image, T template, R result). The summation is done over the template and/or the image patch: x' = 0...w-1, y' = 0...h-1.

matchTemplate parameters:

Image where the search is running. It must be 8-bit or 32-bit floating-point.

Searched template. It must be not greater than the source image and have the same data type.

Parameter specifying the comparison method.

Normalise.

Map of comparison results. It must be single-channel 32-bit floating-point. If image is W x H and templ is w x h, then result is (W-w+1) x (H-h+1).

Contour approximation modes:

Stores absolutely all the contour points. That is, any 2 subsequent points (x1,y1) and (x2,y2) of the contour will be either horizontal, vertical or diagonal neighbors, that is, max(abs(x1-x2),abs(y2-y1)) == 1.

Compresses horizontal, vertical, and diagonal segments and leaves only their end points. For example, an up-right rectangular contour is encoded with 4 points.

Contour retrieval modes:

Retrieves only the extreme outer contours.

Retrieves all of the contours without establishing any hierarchical relationships.

Retrieves all of the contours and organizes them into a two-level hierarchy. At the top level, there are external boundaries of the components. At the second level, there are boundaries of the holes.
If there is another contour inside a hole of a connected component, it is still put at the top level.

Retrieves all of the contours and reconstructs a full hierarchy of nested contours.

Oriented area flag:

Return a signed area value, depending on the contour orientation (clockwise or counter-clockwise). Using this feature you can determine the orientation of a contour by taking the sign of the area.

Return the area as an absolute value.

Calculates a contour area.

The function computes a contour area. Similarly to moments, the area is computed using the <https://en.wikipedia.org/wiki/Green%27s_theorem Green formula>. Thus, the returned area and the number of non-zero pixels, if you draw the contour using drawContours or fillPoly, can be different. Also, the function will most certainly give wrong results for contours with self-intersections.

<http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=contourarea#cv2.contourArea OpenCV Sphinx doc>

Performs a point-in-contour test.

The function determines whether the point is inside a contour, outside, or lies on an edge (or coincides with a vertex). It returns a positive (inside), negative (outside), or zero (on an edge) value, correspondingly. When measureDist=false, the return value is +1, -1, or 0, respectively. Otherwise, the return value is the signed distance between the point and the nearest contour edge.

<http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html#pointpolygontest OpenCV Sphinx doc>

Approximates a polygonal curve(s) with the specified precision.

The function approxPolyDP approximates a curve or a polygon with another curve/polygon with fewer vertices so that the distance between them is less than or equal to the specified precision.
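For a polygonal contour, the Green formula used by contourArea reduces to the shoelace formula; a pure Haskell sketch (a hypothetical `signedArea` over vertex lists, not the bindings' contourArea):

```haskell
-- Signed polygon area via the shoelace (Green's) formula.
-- The sign encodes orientation: positive for counter-clockwise vertices,
-- which is what the oriented-area flag exposes.
signedArea :: [(Double, Double)] -> Double
signedArea pts =
  0.5 * sum [ x1 * y2 - x2 * y1
            | ((x1, y1), (x2, y2)) <- zip pts (tail pts ++ [head pts]) ]

main :: IO ()
main = do
  -- A 3x2 axis-aligned rectangle, counter-clockwise: area +6.
  print (signedArea [(0,0), (3,0), (3,2), (0,2)])
  -- Same rectangle clockwise: -6; taking abs matches the absolute-value flag.
  print (signedArea [(0,0), (0,2), (3,2), (3,0)])
```

The approxPolyDP approximation described here likewise consumes such lists of contour vertices.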
It uses the <http://en.wikipedia.org/wiki/Ramer-Douglas-Peucker_algorithm Douglas-Peucker algorithm>.

<http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=contourarea#approxpolydp OpenCV Sphinx doc>

Parameters:

Input vector of 2D points (contour vertices).
Signed or unsigned area.
Contour.
Point tested against the contour.
If true, the function estimates the signed distance from the point to the nearest contour edge. Otherwise, the function only checks if the point is inside a contour or not.
epsilon.
is closed.
is closed.

Border modes:

1D example: iiiiii|abcdefgh|iiiiiii  (with some specified i)
1D example: aaaaaa|abcdefgh|hhhhhhh
1D example: fedcba|abcdefgh|hgfedcb
1D example: cdefgh|abcdefgh|abcdefg
1D example: gfedcb|abcdefgh|gfedcba
1D example: uvwxyz|abcdefgh|ijklmno
Do not look outside of ROI.

Interpolation methods:

Nearest neighbor interpolation.
Bilinear interpolation.
Bicubic interpolation.
Resampling using pixel area relation. It may be a preferred method for image decimation, as it gives moire'-free results.
But when the image is zoomed, it is similar to the nearest neighbor method.

Lanczos interpolation over an 8x8 neighborhood.

Resize to an absolute size.

Resize with relative factors for both the width and the height.

Resizes an image.

To shrink an image, it will generally look best with InterArea interpolation, whereas to enlarge an image, it will generally look best with InterCubic (slow) or InterLinear (faster but still looks OK).

Example:

resizeInterAreaImg :: Mat ('S ['D, 'D]) ('S 3) ('S Word8)
resizeInterAreaImg = exceptError $
    withMatM (h ::: w + (w `div` 2) ::: Z)
             (Proxy :: Proxy 3)
             (Proxy :: Proxy Word8)
             transparent $ \imgM -> do
      birds_resized <- pureExcept $ resize (ResizeRel $ pure 0.5) InterArea birds_768x512
      matCopyToM imgM (V2 0 0) birds_768x512 Nothing
      matCopyToM imgM (V2 w 0) birds_resized Nothing
      lift $ arrowedLine imgM (V2 startX y) (V2 pointX y) red 4 LineType_8 0 0.15
  where
    [h, w] = miShape $ matInfo birds_768x512
    startX = round $ fromIntegral w * (0.95 :: Double)
    pointX = round $ fromIntegral w * (1.05 :: Double)
    y = h `div` 4

<<doc/generated/examples/resizeInterAreaImg.png resizeInterAreaImg>>

<http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/geometric_transformations.html#resize OpenCV Sphinx doc>

Applies an affine transformation to an image.

Example:

rotateBirds :: Mat (ShapeT [2, 3]) ('S 1) ('S Double)
rotateBirds = getRotationMatrix2D (V2 256 170 :: V2 CFloat) 45 0.75

warpAffineImg :: Birds_512x341
warpAffineImg = exceptError $
    warpAffine birds_512x341 rotateBirds InterArea False False (BorderConstant black)

warpAffineInvImg :: Birds_512x341
warpAffineInvImg = exceptError $
    warpAffine warpAffineImg rotateBirds InterCubic True False (BorderConstant black)

<<doc/generated/birds_512x341.png original>>
<<doc/generated/examples/warpAffineImg.png warpAffineImg>>
<<doc/generated/examples/warpAffineInvImg.png warpAffineInvImg>>

<http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/geometric_transformations.html#warpaffine OpenCV Sphinx
doc>

Applies a perspective transformation to an image.

<http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/geometric_transformations.html#warpperspective OpenCV Sphinx doc>

Inverts an affine transformation.

<http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/geometric_transformations.html#invertaffinetransform OpenCV Sphinx doc>

Calculates a perspective transformation matrix for a 2D perspective transform.

<http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/geometric_transformations.html#getperspectivetransform OpenCV Sphinx doc>

Calculates an affine matrix of 2D rotation.

<http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/geometric_transformations.html#getrotationmatrix2d OpenCV Sphinx doc>

Applies a generic geometrical transformation to an image.

The function remap transforms the source image using the specified map: dst(x, y) = src(map(x, y))

Example:

remapImg
  :: forall (width :: Nat) (height :: Nat) (channels :: Nat) (depth :: *)
   . (Mat ('S ['S height, 'S width]) ('S channels) ('S depth) ~ Birds_512x341)
  => Mat ('S ['S height, 'S width]) ('S channels) ('S depth)
remapImg = exceptError $ remap birds_512x341 transform InterLinear (BorderConstant black)
  where
    transform = exceptError $
        matFromFunc (Proxy :: Proxy [height, width])
                    (Proxy :: Proxy 2)
                    (Proxy :: Proxy Float)
                    exampleFunc

    exampleFunc [_y,  x] 0 = wobble x w
    exampleFunc [ y, _x] 1 = wobble y h
    exampleFunc _pos _channel = error "impossible"

    wobble :: Int -> Float -> Float
    wobble v s =
      let v' = fromIntegral v
          n = v' / s
      in v' + (s * 0.05 * sin (n * 2 * pi * 5))

    w = fromInteger $ natVal (Proxy :: Proxy width)
    h = fromInteger $ natVal (Proxy :: Proxy height)

<<doc/generated/birds_512x341.png original>>
<<doc/generated/examples/remapImg.png remapImg>>

<http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/geometric_transformations.html#remap OpenCV documentation>

The function transforms an image to compensate for radial and tangential lens distortion.

Those pixels in the destination image, for which there is no
correspondent pixels in the source image, are filled with zeros (black color).

The camera matrix and the distortion parameters can be determined using calibrateCamera. If the resolution of images is different from the resolution used at the calibration stage, f_x, f_y, c_x and c_y need to be scaled accordingly, while the distortion coefficients remain the same.

Example:

undistortImg
  :: forall (width :: Nat) (height :: Nat) (channels :: Nat) (depth :: *)
   . (Mat ('S ['S height, 'S width]) ('S channels) ('S depth) ~ Birds_512x341)
  => Mat ('S ['S height, 'S width]) ('S channels) ('S depth)
undistortImg = undistort birds_512x341 intrinsics coefficients
  where
    intrinsics :: M33 Float
    intrinsics =
      V3 (V3 15840.8 0       2049)
         (V3 0       15830.3 1097)
         (V3 0       0       1)

    coefficients :: Matx51d
    coefficients = unsafePerformIO $
        newMatx51d
          (-2.239145913492247)
          13.674526561736648
          3.650187848850095e-2
          (-2.0042015752853796e-2)
          (-0.44790921357620456)

<<doc/generated/birds_512x341.png original>>
<<doc/generated/examples/undistortImg.png undistortImg>>

warpAffine parameters:

Source image.
Affine transformation matrix.
Perform the inverse transformation.
Fill outliers.
Pixel extrapolation method.
Transformed source image.

warpPerspective parameters:

Source image.
Perspective transformation matrix.
Perform the inverse transformation.
Fill outliers.
Pixel extrapolation method.
Transformed source image.

getPerspectiveTransform parameters:

Array of 4 floating-point points representing 4 vertices in the source image.
Array of 4 floating-point points representing 4 vertices in the destination image.
The output perspective transformation: a 3x3 floating-point matrix.

getRotationMatrix2D parameters:

Center of the rotation in the source image.
Rotation angle in degrees. Positive values mean counter-clockwise rotation (the coordinate origin is assumed to be the top-left corner).
Isotropic scale factor.
The output affine transformation: a 2x3 floating-point matrix.

remap parameters:

Source image.
A map of (x, y) points.
Interpolation method to use.
Note that InterArea is not supported by this function.

undistort parameters:

The source image to undistort.
The 3x3 matrix of intrinsic parameters.
The distortion coefficients (k1,k2,p1,p2[,k3[,k4,k5,k6[,s1,s2,s3,s4[,x,y]]]]) of 4, 5, 8, 12 or 14 elements.

Morphological operations:

An opening operation: dilate . erode
A closing operation: erode . dilate
A morphological gradient: dilate - erode
"Top hat": src - open
"Black hat": close - src

Structuring element shapes:

A rectangular structuring element.
An elliptic structuring element, that is, a filled ellipse inscribed into the rectangle Rect(0, 0, esize.width, esize.height).
A cross-shaped structuring element.

Calculates the Laplacian of an image.

The function calculates the Laplacian of the source image by adding up the second x and y derivatives calculated using the Sobel operator.

Example:

laplacianImg
  :: forall shape channels depth
   . (Mat shape channels depth ~ Birds_512x341)
  => Mat shape ('S 1) ('S Double)
laplacianImg = exceptError $ do
    imgG <- cvtColor bgr gray birds_512x341
    laplacian Nothing Nothing Nothing Nothing imgG

<<doc/generated/examples/laplacianImg.png laplacianImg>>

<http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/filtering.html#laplacian OpenCV Sphinx doc>

Blurs an image using the median filter.

Example:

medianBlurImg
  :: forall (width :: Nat) (width2 :: Nat) (height :: Nat) (channels :: Nat) (depth :: *)
   .
     ( Mat (ShapeT [height, width]) ('S channels) ('S depth) ~ Birds_512x341
     , width2 ~ ((*) width 2) -- TODO (RvD): HSE parse error with infix type operator
     )
  => Mat (ShapeT [height, width2]) ('S channels) ('S depth)
medianBlurImg = exceptError $
    withMatM (Proxy :: Proxy [height, width2])
             (Proxy :: Proxy channels)
             (Proxy :: Proxy depth)
             white $ \imgM -> do
      birdsBlurred <- pureExcept $ medianBlur birds_512x341 13
      matCopyToM imgM (V2 0 0) birds_512x341 Nothing
      matCopyToM imgM (V2 w 0) birdsBlurred Nothing
  where
    w = fromInteger $ natVal (Proxy :: Proxy width)

<<doc/generated/examples/medianBlurImg.png medianBlurImg>>

<http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/filtering.html#medianblur OpenCV Sphinx doc>

Blurs an image using a box filter.

Example:

boxBlurImg
  :: forall (width :: Nat) (width2 :: Nat) (height :: Nat) (channels :: Nat) (depth :: *)
   . ( Mat (ShapeT [height, width]) ('S channels) ('S depth) ~ Birds_512x341
     , width2 ~ ((*) width 2) -- TODO (RvD): HSE parse error with infix type operator
     )
  => Mat (ShapeT [height, width2]) ('S channels) ('S depth)
boxBlurImg = exceptError $
    withMatM (Proxy :: Proxy [height, width2])
             (Proxy :: Proxy channels)
             (Proxy :: Proxy depth)
             white $ \imgM -> do
      birdsBlurred <- pureExcept $ blur (V2 13 13 :: V2 Int32) birds_512x341
      matCopyToM imgM (V2 0 0) birds_512x341 Nothing
      matCopyToM imgM (V2 w 0) birdsBlurred Nothing
  where
    w = fromInteger $ natVal (Proxy :: Proxy width)

<<doc/generated/examples/boxBlurImg.png boxBlurImg>>

<http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/filtering.html#blur OpenCV Sphinx doc>

Erodes an image by using a specific structuring element.

Example:

erodeImg
  :: forall (width :: Nat) (width2 :: Nat) (height :: Nat) (channels :: Nat) (depth :: *)
   .
     ( Mat (ShapeT [height, width]) ('S channels) ('S depth) ~ Lambda
     , width2 ~ ((*) width 2) -- TODO (RvD): HSE parse error with infix type operator
     )
  => Mat (ShapeT [height, width2]) ('S channels) ('S depth)
erodeImg = exceptError $
    withMatM (Proxy :: Proxy [height, width2])
             (Proxy :: Proxy channels)
             (Proxy :: Proxy depth)
             white $ \imgM -> do
      erodedLambda <- pureExcept $ erode lambda Nothing (Nothing :: Maybe Point2i) 5 BorderReplicate
      matCopyToM imgM (V2 0 0) lambda Nothing
      matCopyToM imgM (V2 w 0) erodedLambda Nothing
  where
    w = fromInteger $ natVal (Proxy :: Proxy width)

<<doc/generated/examples/erodeImg.png erodeImg>>

<http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/filtering.html#erode OpenCV Sphinx doc>

Convolves an image with the kernel.

Example:

filter2DImg
  :: forall (width :: Nat) (width2 :: Nat) (height :: Nat) (channels :: Nat) (depth :: *)
   . ( Mat (ShapeT [height, width]) ('S channels) ('S depth) ~ Birds_512x341
     , width2 ~ ((*) width 2) -- TODO (RvD): HSE parse error with infix type operator
     )
  => Mat (ShapeT [height, width2]) ('S channels) ('S depth)
filter2DImg = exceptError $
    withMatM (Proxy :: Proxy [height, width2])
             (Proxy :: Proxy channels)
             (Proxy :: Proxy depth)
             white $ \imgM -> do
      filteredBird <- pureExcept $ filter2D birds_512x341 kernel (Nothing :: Maybe Point2i) 0 BorderReplicate
      matCopyToM imgM (V2 0 0) birds_512x341 Nothing
      matCopyToM imgM (V2 w 0) filteredBird Nothing
  where
    w = fromInteger $ natVal (Proxy :: Proxy width)
    kernel = exceptError $
        withMatM (Proxy :: Proxy [3, 3])
                 (Proxy :: Proxy 1)
                 (Proxy :: Proxy Double)
                 black $ \imgM -> do
          lift $ line imgM (V2 0 0 :: V2 Int32) (V2 0 0 :: V2 Int32) (V4 (-2) (-2) (-2) 1 :: V4 Double) 0 LineType_8 0
          lift $ line imgM (V2 1 0 :: V2 Int32) (V2 0 1 :: V2 Int32) (V4 (-1) (-1) (-1) 1 :: V4 Double) 0 LineType_8 0
          lift $ line imgM (V2 1 1 :: V2 Int32) (V2 1 1 :: V2 Int32) (V4 1 1 1 1 :: V4 Double) 0 LineType_8 0
          lift $ line imgM (V2 1 2 :: V2 Int32) (V2 2 1 :: V2 Int32) (V4 1 1 1 1 :: V4 Double) 0
          LineType_8 0
          lift $ line imgM (V2 2 2 :: V2 Int32) (V2 2 2 :: V2 Int32) (V4 2 2 2 1 :: V4 Double) 0 LineType_8 0

<<doc/generated/examples/filter2DImg.png filter2DImg>>

<http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/filtering.html#filter2d OpenCV Sphinx doc>

Dilates an image by using a specific structuring element.

Example:

dilateImg
  :: forall (width :: Nat) (width2 :: Nat) (height :: Nat) (channels :: Nat) (depth :: *)
   . ( Mat (ShapeT [height, width]) ('S channels) ('S depth) ~ Lambda
     , width2 ~ ((*) width 2) -- TODO (RvD): HSE parse error with infix type operator
     )
  => Mat (ShapeT [height, width2]) ('S channels) ('S depth)
dilateImg = exceptError $
    withMatM (Proxy :: Proxy [height, width2])
             (Proxy :: Proxy channels)
             (Proxy :: Proxy depth)
             white $ \imgM -> do
      dilatedLambda <- pureExcept $ dilate lambda Nothing (Nothing :: Maybe Point2i) 3 BorderReplicate
      matCopyToM imgM (V2 0 0) lambda Nothing
      matCopyToM imgM (V2 w 0) dilatedLambda Nothing
  where
    w = fromInteger $ natVal (Proxy :: Proxy width)

<<doc/generated/examples/dilateImg.png dilateImg>>

<http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/filtering.html#dilate OpenCV Sphinx doc>

Performs advanced morphological transformations.

<http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/filtering.html#morphologyex OpenCV Sphinx doc>

Returns a structuring element of the specified size and shape for morphological operations.

Example:

type StructureImg = Mat (ShapeT [128, 128]) ('S 1) ('S Word8)

structureImg :: MorphShape -> StructureImg
structureImg shape = exceptError $ do
    mat <- getStructuringElement shape (Proxy :: Proxy 128) (Proxy :: Proxy 128)
    img <- matConvertTo (Just 255) Nothing mat
    bitwiseNot img

morphRectImg :: StructureImg
morphRectImg = structureImg MorphRect

morphEllipseImg :: StructureImg
morphEllipseImg = structureImg MorphEllipse

morphCrossImg :: StructureImg
morphCrossImg = structureImg $ MorphCross $ toPoint (pure (-1) :: V2 Int32)

<<doc/generated/examples/morphRectImg.png morphRectImg>>
<<doc/generated/examples/morphEllipseImg.png morphEllipseImg>>
<<doc/generated/examples/morphCrossImg.png morphCrossImg>>

<http://docs.opencv.org/3.0-last-rst/modules/imgproc/doc/filtering.html#getstructuringelement OpenCV Sphinx doc>

laplacian parameters:

Aperture size used to compute the second-derivative filters. The size must be positive and odd. Default value is 1.
Optional scale factor for the computed Laplacian values. Default value is 1.
Optional delta value that is added to the results. Default value is 0.
Pixel extrapolation method.

medianBlur parameters:

Input 1-, 3-, or 4-channel image; when ksize is 3 or 5, the image depth should be Word8, Word16, or Float; for larger aperture sizes, it can only be Word8.
Aperture linear size; it must be odd and greater than 1, for example: 3, 5, 7...

blur parameters:

Blurring kernel size.

gaussianBlur parameters:

Blurring kernel size. sigmaX. sigmaY.

erode parameters:

Input image.
Structuring element used for erosion. If Nothing is used, a 3x3 rectangular structuring element is used. A kernel can be created using getStructuringElement.
anchor. iterations.

filter2D parameters:

Input image.
Convolution kernel (or rather a correlation kernel): a single-channel floating point matrix. If you want to apply different kernels to different channels, split the image into separate color planes using split and process them individually.
anchor. delta.

dilate parameters:

Input image.
Structuring element used for dilation. If Nothing is used, a 3x3 rectangular structuring element is used. A kernel can be created using getStructuringElement.
anchor. iterations.

morphologyEx parameters:

Source image.
Type of a morphological operation.
Structuring element.
Anchor position with the kernel.
Number of times erosion and dilation are applied.
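Erosion and dilation with a flat structuring element are min/max filters; a pure Haskell 1-D sketch (toy `erode1D`/`dilate1D` helpers, unrelated to the bindings' erode and dilate):

```haskell
-- 1-D grayscale morphology with a flat structuring element of radius r:
-- each output sample is the min (erosion) or max (dilation) over a
-- (2r+1)-wide window. Borders clamp indices (BorderReplicate behaviour).
morph1D :: Ord a => ([a] -> a) -> Int -> [a] -> [a]
morph1D pick r xs =
  [ pick [ xs !! clamp (i + d) | d <- [-r .. r] ] | i <- [0 .. n - 1] ]
  where
    n = length xs
    clamp j = max 0 (min (n - 1) j)

erode1D, dilate1D :: Ord a => Int -> [a] -> [a]
erode1D  = morph1D minimum
dilate1D = morph1D maximum

main :: IO ()
main = do
  print (erode1D  1 [5, 5, 1, 5, 5])  -- the dark sample grows: [5,1,1,1,5]
  print (dilate1D 1 [5, 5, 1, 5, 5])  -- the dark sample vanishes: [5,5,5,5,5]
```

Composing them gives the compound operations listed above: opening is dilation after erosion, closing is erosion after dilation.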
Example:

carAnim :: Animation (ShapeT [240, 320]) ('S 3) ('S Word8)
carAnim = carOverhead

mog2Anim :: IO (Animation (ShapeT [240, 320]) ('S 3) ('S Word8))
mog2Anim = do
    mog2 <- newBackgroundSubtractorMOG2 Nothing Nothing Nothing
    forM carOverhead $ \(delay, img) -> do
      fg <- bgSubApply mog2 0.1 img
      fgBgr <- exceptErrorIO $ pureExcept $ cvtColor gray bgr fg
      pure (delay, fgBgr)

Original: <<doc/generated/examples/car.gif carAnim>>
Foreground: <<doc/generated/examples/mog2.gif mog2Anim>>

Background subtractor (KNN) parameters:

Length of the history.
Threshold on the squared distance between the pixel and the sample to decide whether a pixel is close to that sample. This parameter does not affect the background update.
If True, the algorithm will detect shadows and mark them. It decreases the speed a bit, so if you do not need this feature, set the parameter to False.

Background subtractor (MOG2) parameters:

Length of the history.
Threshold on the squared Mahalanobis distance between the pixel and the model to decide whether a pixel is well described by the background model. This parameter does not affect the background update.
If True, the algorithm will detect shadows and mark them. It decreases the speed a bit, so if you do not need this feature, set the parameter to False.

Video writing:

The API might change in the future, but currently we can:

Open/create a new file:

wr <- ... $ ... "tst.MOV" "avc1" 30 (3840, 2160)

Now, we can write some frames, but they need to have exactly the same size as the one we have opened with:

... $ ... wr img

We need to close at the end or it will not finalize the file:

... $ ... wr

Flip modes:

Flip around the x-axis.
Flip around the y-axis.
Flip around both the x- and y-axis.

Calculates an absolute value of each matrix element.
<http://docs.opencv.org/3.0-last-rst/modules/core/doc/operations_on_arrays.html#abs OpenCV Sphinx doc>

Calculates the per-element absolute difference between two arrays.

Example:

matAbsDiffImg :: Mat (ShapeT [341, 512]) ('S 3) ('S Word8)
matAbsDiffImg = matAbsDiff flower_512x341 sailboat_512x341

<<doc/generated/examples/matAbsDiffImg.png matAbsDiffImg>>

<http://docs.opencv.org/3.0-last-rst/modules/core/doc/operations_on_arrays.html#absdiff OpenCV Sphinx doc>

Calculates the per-element sum of two arrays.

Example:

matAddImg :: Mat (ShapeT [341, 512]) ('S 3) ('S Word8)
matAddImg = matAdd flower_512x341 sailboat_512x341

<<doc/generated/examples/matAddImg.png matAddImg>>

<http://docs.opencv.org/3.0-last-rst/modules/core/doc/operations_on_arrays.html#add OpenCV Sphinx doc>

Calculates the per-element difference between two arrays.

Example:

matSubtractImg :: Mat (ShapeT [341, 512]) ('S 3) ('S Word8)
matSubtractImg = matSubtract flower_512x341 sailboat_512x341

<<doc/generated/examples/matSubtractImg.png matSubtractImg>>

<http://docs.opencv.org/3.0-last-rst/modules/core/doc/operations_on_arrays.html#subtract OpenCV Sphinx doc>

Calculates the weighted sum of two arrays.

Example:

matAddWeightedImg :: Mat (ShapeT [341, 512]) ('S 3) ('S Word8)
matAddWeightedImg = exceptError $
    matAddWeighted flower_512x341 0.5 sailboat_512x341 0.5 0.0

<<doc/generated/examples/matAddWeightedImg.png matAddWeightedImg>>

<http://docs.opencv.org/3.0-last-rst/modules/core/doc/operations_on_arrays.html#addweighted OpenCV Sphinx doc>

Calculates the sum of a scaled array and another array.

The function scaleAdd is one of the classical primitive linear algebra operations, known as DAXPY or SAXPY in BLAS. It calculates the sum of a scaled array and another array.
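For 8-bit images these per-element operations saturate rather than wrap. The weighted sum's per-pixel formula, dst = saturate(src1 * alpha + src2 * beta + gamma), can be sketched in pure Haskell (a toy `addWeightedPx`, not the bindings' matAddWeighted):

```haskell
import Data.Word (Word8)

-- Clamp a Double into the Word8 range before rounding,
-- mirroring OpenCV's saturate_cast behaviour.
saturate :: Double -> Word8
saturate = round . max 0 . min 255

-- Per-pixel weighted sum: dst = saturate(src1*alpha + src2*beta + gamma).
addWeightedPx :: Double -> Double -> Double -> Word8 -> Word8 -> Word8
addWeightedPx alpha beta gamma p1 p2 =
  saturate (fromIntegral p1 * alpha + fromIntegral p2 * beta + gamma)

main :: IO ()
main = do
  print (addWeightedPx 0.5 0.5 0 100 200)  -- a 50/50 blend: 150
  print (addWeightedPx 1.0 1.0 0 200 200)  -- would be 400, saturates to 255
```

The saturation step is why adding two bright images with matAdd produces clipped whites instead of overflow artifacts.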
<http://docs.opencv.org/3.0-last-rst/modules/core/doc/operations_on_arrays.html#scaleadd OpenCV Sphinx doc>

Example:

bitwiseNotImg :: Mat (ShapeT VennShape) ('S 3) ('S Word8)
bitwiseNotImg = exceptError $ do
    img <- bitwiseNot vennCircleAImg
    imgBgr <- cvtColor gray bgr img
    createMat $ do
      imgM <- lift $ thaw imgBgr
      lift $ vennCircleA imgM blue 2
      pure imgM

<<doc/generated/examples/bitwiseNotImg.png bitwiseNotImg>>

<http://docs.opencv.org/3.0-last-rst/modules/core/doc/operations_on_arrays.html#bitwise-not OpenCV Sphinx doc>

Example:

bitwiseAndImg :: Mat (ShapeT VennShape) ('S 3) ('S Word8)
bitwiseAndImg = exceptError $ do
    img <- bitwiseAnd vennCircleAImg vennCircleBImg
    imgBgr <- cvtColor gray bgr img
    createMat $ do
      imgM <- lift $ thaw imgBgr
      lift $ vennCircleA imgM blue 2
      lift $ vennCircleB imgM red 2
      pure imgM

<<doc/generated/examples/bitwiseAndImg.png bitwiseAndImg>>

<http://docs.opencv.org/3.0-last-rst/modules/core/doc/operations_on_arrays.html#bitwise-and OpenCV Sphinx doc>

Example:

bitwiseOrImg :: Mat (ShapeT VennShape) ('S 3) ('S Word8)
bitwiseOrImg = exceptError $ do
    img <- bitwiseOr vennCircleAImg vennCircleBImg
    imgBgr <- cvtColor gray bgr img
    createMat $ do
      imgM <- lift $ thaw imgBgr
      lift $ vennCircleA imgM blue 2
      lift $ vennCircleB imgM red 2
      pure imgM

<<doc/generated/examples/bitwiseOrImg.png bitwiseOrImg>>

<http://docs.opencv.org/3.0-last-rst/modules/core/doc/operations_on_arrays.html#bitwise-or OpenCV Sphinx doc>

Example:

bitwiseXorImg :: Mat (ShapeT VennShape) ('S 3) ('S Word8)
bitwiseXorImg = exceptError $ do
    img <- bitwiseXor vennCircleAImg vennCircleBImg
    imgBgr <- cvtColor gray bgr img
    createMat $ do
      imgM <- lift $ thaw imgBgr
      lift $ vennCircleA imgM blue 2
      lift $ vennCircleB imgM red 2
      pure imgM

<<doc/generated/examples/bitwiseXorImg.png bitwiseXorImg>>

<http://docs.opencv.org/3.0-last-rst/modules/core/doc/operations_on_arrays.html#bitwise-xor OpenCV Sphinx doc>

Creates one multichannel array out of several single-channel ones.
<http://docs.opencv.org/3.0-last-rst/modules/core/doc/operations_on_arrays.html#merge OpenCV Sphinx doc>

Divides a multi-channel array into several single-channel arrays.

Example:

matSplitImg
  :: forall (width :: Nat) (width3 :: Nat) (height :: Nat) (channels :: Nat) (depth :: *)
   . ( Mat (ShapeT [height, width]) ('S channels) ('S depth) ~ Birds_512x341
     , width3 ~ ((*) width 3)
     )
  => Mat (ShapeT [height, width3]) ('S channels) ('S depth)
matSplitImg = exceptError $ do
    zeroImg <- mkMat (Proxy :: Proxy [height, width])
                     (Proxy :: Proxy 1)
                     (Proxy :: Proxy depth)
                     black
    let blueImg  = matMerge $ V.fromList [channelImgs V.! 0, zeroImg, zeroImg]
        greenImg = matMerge $ V.fromList [zeroImg, channelImgs V.! 1, zeroImg]
        redImg   = matMerge $ V.fromList [zeroImg, zeroImg, channelImgs V.! 2]
    withMatM (Proxy :: Proxy [height, width3])
             (Proxy :: Proxy channels)
             (Proxy :: Proxy depth)
             white $ \imgM -> do
      matCopyToM imgM (V2 (w*0) 0) (unsafeCoerceMat blueImg)  Nothing
      matCopyToM imgM (V2 (w*1) 0) (unsafeCoerceMat greenImg) Nothing
      matCopyToM imgM (V2 (w*2) 0) (unsafeCoerceMat redImg)   Nothing
  where
    channelImgs = matSplit birds_512x341

    w :: Int32
    w = fromInteger $ natVal (Proxy :: Proxy width)

<<doc/generated/examples/matSplitImg.png matSplitImg>>

<http://docs.opencv.org/3.0-last-rst/modules/core/doc/operations_on_arrays.html#split OpenCV Sphinx doc>

Apply the same 1-dimensional action to every channel.

Finds the global minimum and maximum in an array.

<http://docs.opencv.org/3.0-last-rst/modules/core/doc/operations_on_arrays.html#minmaxloc OpenCV Sphinx doc>

Calculates an absolute array norm.

<http://docs.opencv.org/3.0-last-rst/modules/core/doc/operations_on_arrays.html#norm OpenCV Sphinx doc>

Calculates an absolute difference norm, or a relative difference norm.

<http://docs.opencv.org/3.0-last-rst/modules/core/doc/operations_on_arrays.html#norm OpenCV Sphinx doc>

Normalizes the norm or value range of an array.

<http://docs.opencv.org/3.0-last-rst/modules/core/doc/operations_on_arrays.html#normalize OpenCV
Sphinx doc$Calculates the sum of array elementsExample: matSumImg :: Mat (ShapeT [201, 201]) ('S 3) ('S Word8) matSumImg = exceptError $ withMatM (Proxy :: Proxy [201, 201]) (Proxy :: Proxy 3) (Proxy :: Proxy Word8) black $ \imgM -> do -- Draw a filled circle. Each pixel has a value of (255,255,255) lift $ circle imgM (pure radius :: V2 Int32) radius white (-1) LineType_8 0 -- Calculate the sum of all pixels. scalar <- matSumM imgM let V4 area _y _z _w = fromScalar scalar :: V4 Double -- Circle area = pi * radius * radius let approxPi = area / 255 / (radius * radius) lift $ putText imgM (T.pack $ show approxPi) (V2 40 110 :: V2 Int32) (Font FontHersheyDuplex NotSlanted 1) blue 1 LineType_AA False where radius :: forall a. Num a => a radius = 100  $doc/generated/examples/matSumImg.png matSumImg Rhttp://docs.opencv.org/3.0-last-rst/modules/core/doc/operations_on_arrays.html#sumOpenCV Sphinx doc:Calculates a mean and standard deviation of array elements Yhttp://docs.opencv.org/3.0-last-rst/modules/core/doc/operations_on_arrays.html#meanstddevOpenCV Sphinx doc<Flips a 2D matrix around vertical, horizontal, or both axes._The example scenarios of using the function are the following: Vertical flipping of the image () to switch between top-left and bottom-left image origin. This is a typical operation in video processing on Microsoft Windows* OS. Horizontal flipping of the image with the subsequent horizontal shift and absolute difference calculation to check for a vertical-axis symmetry (). Simultaneous horizontal and vertical flipping of the image with the subsequent shift and absolute difference calculation to check for a central symmetry ((). 
  * Reversing the order of point arrays.

  Example:

  > matFlipImg :: Mat (ShapeT [341, 512]) ('S 3) ('S Word8)
  > matFlipImg = matFlip sailboat_512x341 FlipBoth

  <<doc/generated/examples/matFlipImg.png matFlipImg>>

matTranspose
  Transposes a matrix.

  Example:

  > matTransposeImg :: Mat (ShapeT [512, 341]) ('S 3) ('S Word8)
  > matTransposeImg = matTranspose sailboat_512x341

  <<doc/generated/examples/matTransposeImg.png matTransposeImg>>

hconcat
  Applies horizontal concatenation to given matrices.

  Example:

  > hconcatImg :: Mat ('S '[ 'D, 'D ]) ('S 3) ('S Word8)
  > hconcatImg = exceptError $
  >     hconcat $ V.fromList
  >       [ halfSize birds_768x512
  >       , halfSize flower_768x512
  >       , halfSize sailboat_768x512
  >       ]
  >   where
  >     halfSize = exceptError . resize (ResizeRel 0.5) InterArea

  <<doc/generated/examples/hconcatImg.png hconcatImg>>

vconcat
  Applies vertical concatenation to given matrices.

  Example:

  > vconcatImg :: Mat ('S '[ 'D, 'D ]) ('S 3) ('S Word8)
  > vconcatImg = exceptError $
  >     vconcat $ V.fromList
  >       [ halfSize birds_768x512
  >       , halfSize flower_768x512
  >       , halfSize sailboat_768x512
  >       ]
  >   where
  >     halfSize = exceptError . resize (ResizeRel 0.5) InterArea

  <<doc/generated/examples/vconcatImg.png vconcatImg>>

Parameter documentation (recovered fragments):

  * matAddWeighted: src1 (first input array); alpha (scale factor for the
    first array); src2 (second input array); beta; gamma.
  * norm: input array; calculated norm; absolute or relative norm; optional
    operation mask (it must have the same size as the input array and
    1 channel).
  * normDiff: first input array; second input array of the same size and
    type as the first; calculated norm; optional operation mask.
  * normalize: norm value to normalize to, or the lower range boundary in
    case of the range normalization; upper range boundary in case of the
    range normalization (it is not used for the norm normalization);
    optional operation mask; input array.
  * matSum / meanStdDev: input array that must have from 1 to 4 channels;
    optional operation mask.
  * matFlip: how to flip.
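minMaxLoc, norm, normDiff, and normalize are documented above without examples. The following is a minimal sketch of combining norm and minMaxLoc; the helper name imgStats, the exact argument order, and the return shapes are assumptions for illustration, not part of the original documentation:

```haskell
-- Hedged sketch: signatures of norm and minMaxLoc are assumed from the
-- descriptions above, not verified against the library.
imgStats img = exceptError $ do
    -- Absolute L2 norm of the whole image, no operation mask.
    l2 <- norm Norm_L2 Nothing img
    -- Global minimum and maximum values (their locations are ignored here).
    (minVal, maxVal, _minLoc, _maxLoc) <- minMaxLoc img
    pure (l2, minVal, maxVal)
```

All three operations run in the same CvExcept context as the documented examples, so exceptError collapses any OpenCV-level failure in the usual way.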
FindHomographyMethod
  * FindHomographyMethod_0: a regular method using all the points.
  * FindHomographyMethod_RANSAC: RANSAC-based robust method.
  * FindHomographyMethod_LMEDS: Least-Median robust method.
  * FindHomographyMethod_RHO: PROSAC-based robust method.

findFundamentalMat
  Calculates a fundamental matrix from the corresponding points in two
  images.

  The minimum number of points required depends on the
  FundamentalMatMethod:

  * FM_7Point: N == 7
  * FM_8Point: N >= 8
  * FM_Ransac: N >= 15
  * FM_Lmeds:  N >= 8

  With 7 points the FM_7Point method is used, despite the given method.
  With more than 7 points the FM_7Point method will be replaced by the
  FM_8Point method. Between 7 and 15 points the FM_Ransac method will be
  replaced by the FM_Lmeds method.

  With the FM_7Point method and with 7 points the result can contain up to
  3 matrices, resulting in either 3, 6 or 9 rows. This is why the number of
  resulting rows is tagged as 'D'ynamic. For all other methods the result
  always contains 3 rows.

  <http://docs.opencv.org/3.0-last-rst/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#findfundamentalmat OpenCV Sphinx doc>

computeCorrespondEpilines
  For points in an image of a stereo pair, computes the corresponding
  epilines in the other image.

  <http://docs.opencv.org/3.0-last-rst/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#computecorrespondepilines OpenCV Sphinx doc>

Parameter documentation (recovered fragments):

  * findFundamentalMat: points from the first image; points from the second
    image.
  * findHomography: points from the first image; points from the second
    image.
  * computeCorrespondEpilines: points; image which contains the points;
    fundamental matrix.

OpenCV.Juicy

Filter
  An OpenCV 2D-filter preserving the matrix type.

Mat2D
  An OpenCV bidimensional matrix.

PixelChannels
  Map Pixel types to a number of channels.

PixelDepth
  Map Pixel types to a depth.

fromImage
  Compute an OpenCV 2D-matrix from a JuicyPixels image.

  Example:

  > fromImageImg :: IO (Mat ('S '[ 'D, 'D]) ('S 3) ('S Word8))
  > fromImageImg = do
  >     r <- Codec.Picture.readImage "data/Lenna.png"
  >     case r of
  >       Left err -> error err
  >       Right (Codec.Picture.ImageRGB8 img) ->
  >         pure $ OpenCV.Juicy.fromImage img
  >       Right _ -> error "Unhandled JuicyPixels format!"

  <<doc/generated/examples/fromImageImg.png fromImageImg>>

  Parameter: JuicyPixels image.

toImage
  Compute a JuicyPixels image from an OpenCV 2D-matrix.

  FIXME: There's a bug in the colour conversions in the example:

  Example:

  > toImageImg :: IO (Mat ('S '[ 'D, 'D]) ('S 3) ('S Word8))
  > toImageImg =
  >     exceptError . cvtColor rgb bgr . from . to . exceptError . cvtColor bgr rgb
  >       <$> fromImageImg
  >   where
  >     to :: OpenCV.Juicy.Mat2D 'D 'D ('S 3) ('S Word8)
  >        -> Codec.Picture.Image Codec.Picture.PixelRGB8
  >     to = OpenCV.Juicy.toImage
  >
  >     from :: Codec.Picture.Image Codec.Picture.PixelRGB8
  >          -> OpenCV.Juicy.Mat2D 'D 'D ('S 3) ('S Word8)
  >     from = OpenCV.Juicy.fromImage

  <<doc/generated/examples/toImageImg.png toImageImg>>

  Parameter: OpenCV 2D-matrix.

isoJuicy
  Apply an OpenCV 2D-filter to a JuicyPixels dynamic matrix, preserving the
  Juicy pixel encoding.

  Parameters: OpenCV 2D-filter; JuicyPixels dynamic image.
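The point-count rules for findFundamentalMat can be illustrated with a small guard. The wrapper name eightPointF and the use of Data.Vector for the point sets are assumptions for illustration; only findFundamentalMat and FM_8Point come from this documentation:

```haskell
-- Hedged sketch: with N >= 8 matched points the FM_8Point method applies,
-- and the result always contains 3 rows (only FM_7Point is 'D'ynamic).
eightPointF pts1 pts2
    | V.length pts1 >= 8 && V.length pts1 == V.length pts2 =
        Just $ findFundamentalMat pts1 pts2 FM_8Point
    | otherwise = Nothing   -- too few points, or the point sets differ in size
```

With exactly 7 matches FM_7Point would be selected regardless of the requested method, so checking the count up front keeps the result's row count predictable.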
Modules in package opencv-0.0.1.0:

  OpenCV
  OpenCV.TypeLevel
  OpenCV.VideoIO.Types
  OpenCV.Core.Types
  OpenCV.Core.Types.Mat
  OpenCV.Unsafe
  OpenCV.ImgCodecs
  OpenCV.Core.ArrayOps
  OpenCV.Core.Types.Matx
  OpenCV.Core.Types.Point
  OpenCV.Core.Types.Size
  OpenCV.Core.Types.Vec
  OpenCV.Exception
  OpenCV.Photo
  OpenCV.Core.Types.Rect
  OpenCV.ImgProc.MiscImgTransform
  OpenCV.ImgProc.MiscImgTransform.ColorCodes
  OpenCV.Core.Types.Mat.Repa
  OpenCV.HighGui
  OpenCV.Video
  OpenCV.VideoIO.VideoCapture
  OpenCV.Features2d
  OpenCV.ImgProc.CascadeClassifier
  OpenCV.ImgProc.ColorMaps
  OpenCV.ImgProc.Drawing
  OpenCV.ImgProc.FeatureDetection
  OpenCV.ImgProc.ObjectDetection
  OpenCV.ImgProc.StructuralAnalysis
  OpenCV.ImgProc.Types
  OpenCV.Core.Types.Mat.HMat
  OpenCV.ImgProc.GeometricImgTransform
  OpenCV.ImgProc.ImgFiltering
  OpenCV.JSON
  OpenCV.Video.MotionAnalysis
  OpenCV.VideoIO.VideoWriter
  OpenCV.Calib3d
  OpenCV.Juicy
  OpenCV.Internal.VideoIO.Constants
  OpenCV.Internal.VideoIO.Types
  OpenCV.Internal.Photo.Constants
  OpenCV.Internal.Mutable
  OpenCV.Internal.ImgProc.MiscImgTransform.TypeLevel
  OpenCV.Internal.ImgCodecs
  OpenCV.Internal.Core.Types.Mat.Depth
  OpenCV.Internal.Core.Types.Mat.Marshal
  OpenCV.Internal.Core.Types.Constants
  OpenCV.Internal.Core.ArrayOps
  OpenCV.Internal.Calib3d.Constants
  OpenCV.Internal.C.Types
  OpenCV.Internal.Core.Types.Matx
  OpenCV.Internal.Core.Types.Point
  OpenCV.Internal.Core.Types.Size
  OpenCV.Internal.Core.Types.Vec
  OpenCV.Internal.C.PlacementNew
  OpenCV.Internal.C.PlacementNew.TH
  OpenCV.Internal.C.Inline
  OpenCV.Internal
  OpenCV.Internal.Core.Types.Matx.TH
  OpenCV.Internal.Core.Types.Point.TH
  OpenCV.Internal.Core.Types.Size.TH
  OpenCV.Internal.Core.Types.Vec.TH
  OpenCV.Internal.Exception
  OpenCV.Internal.Core.Types
  OpenCV.Internal.Core.Types.Mat
  OpenCV.Internal.Core.Types.Rect
  OpenCV.Internal.Core.Types.Rect.TH
  OpenCV.Internal.ImgProc.MiscImgTransform
  OpenCV.Internal.ImgProc.MiscImgTransform.ColorCodes
  OpenCV.Internal.Core.Types.Mat.ToFrom
  OpenCV.Internal.Core.Types.Mat.HMat
  OpenCV.Internal.ImgProc.Types
inline_c_ffi_6989586621679307708 inline_c_ffi_6989586621679307716 inline_c_ffi_6989586621679307721 inline_c_ffi_6989586621679308000 inline_c_ffi_6989586621679308011 inline_c_ffi_6989586621679308026 inline_c_ffi_6989586621679308034 inline_c_ffi_6989586621679308039 inline_c_ffi_6989586621679308359 inline_c_ffi_6989586621679308370 inline_c_ffi_6989586621679308385 inline_c_ffi_6989586621679308393 inline_c_ffi_6989586621679308398 inline_c_ffi_6989586621679308718 inline_c_ffi_6989586621679308729 inline_c_ffi_6989586621679308744 inline_c_ffi_6989586621679308752 inline_c_ffi_6989586621679308757 newWholeRange withArrayPtrForeign.ForeignPtr.ImpwithForeignPtr inline_c_ffi_6989586621679323310 inline_c_ffi_6989586621679323327 inline_c_ffi_6989586621679323365 inline_c_ffi_6989586621679323377 inline_c_ffi_6989586621679323383 inline_c_ffi_6989586621679323430 inline_c_ffi_6989586621679324464 inline_c_ffi_6989586621679324469 inline_c_ffi_6989586621679324478 inline_c_ffi_6989586621679324487 inline_c_ffi_6989586621679324496 inline_c_ffi_6989586621679324505unRangeunTermCriteria unRotatedRectunScalar newScalarnewRotatedRectnewTermCriterianewRange withPolygons$fWithPtrRange$fWithPtrTermCriteria$fWithPtrRotatedRect$fWithPtrScalar$fFromScalarV4$fFromScalarV40$fFromScalarScalar $fToScalarV4 $fToScalarV40$fToScalarScalar$fFromPtrRange$fFromPtrTermCriteria$fFromPtrRotatedRect$fFromPtrScalar$fPlacementNewC'ScalarkeepMatAliveDuringGHC.ForeignPtr ForeignPtrPtr dimPositions $fToShape::: $fToShapeZ%vector-0.11.0.0-LMwQhhnXj8U3T5Bm1JFxG Data.VectorVector$fToShapeProxy$fToShapeProxy0 $fToShape[]$fToShapeVector inline_c_ffi_6989586621679343075 inline_c_ffi_6989586621679343104 inline_c_ffi_6989586621679343130 inline_c_ffi_6989586621679343152 inline_c_ffi_6989586621679343183 inline_c_ffi_6989586621679343215 inline_c_ffi_6989586621679346001 inline_c_ffi_6989586621679346006unMat newEmptyMatnewMat withVector withMatDatamatElemAddress cloneMatIO$fToShapeDSProxy$fToShapeDSproxy$fFreezeThawMat $fFromPtrMat 
$fWithPtrMat$fPlacementNewC'Mat inline_c_ffi_6989586621679370734 inline_c_ffi_6989586621679370762 inline_c_ffi_6989586621679370801 inline_c_ffi_6989586621679370827 inline_c_ffi_6989586621679370845marshalInpaintingMethodunRect$fFromJSONRect $fToJSONRect$fFromJSONHRect $fToJSONHRect $fShowRect $fWithPtrRect mkRectType inline_c_ffi_6989586621679427717 inline_c_ffi_6989586621679427723 inline_c_ffi_6989586621679427729 inline_c_ffi_6989586621679427735 inline_c_ffi_6989586621679427741 inline_c_ffi_6989586621679427751 inline_c_ffi_6989586621679427767 inline_c_ffi_6989586621679427787 inline_c_ffi_6989586621679427792 inline_c_ffi_6989586621679428299 inline_c_ffi_6989586621679428305 inline_c_ffi_6989586621679428311 inline_c_ffi_6989586621679428317 inline_c_ffi_6989586621679428323 inline_c_ffi_6989586621679428333 inline_c_ffi_6989586621679428349 inline_c_ffi_6989586621679428369 inline_c_ffi_6989586621679428374 inline_c_ffi_6989586621679428865 inline_c_ffi_6989586621679428871 inline_c_ffi_6989586621679428877 inline_c_ffi_6989586621679428883 inline_c_ffi_6989586621679428889 inline_c_ffi_6989586621679428899 inline_c_ffi_6989586621679428915 inline_c_ffi_6989586621679428935 inline_c_ffi_6989586621679428940c'THRESH_BINARYc'THRESH_BINARY_INVc'THRESH_TRUNCc'THRESH_TOZEROc'THRESH_TOZERO_INVmarshalThreshType c'THRESH_OTSUc'THRESH_TRIANGLEmarshalThreshValuec'FLOODFILL_FIXED_RANGEc'FLOODFILL_MASK_ONLYc'GC_INIT_WITH_RECTc'GC_INIT_WITH_MASK 
c'GC_EVALmarshalGrabCutOperationModemarshalGrabCutOperationModeRectc'COLOR_BGR2BGRAc'COLOR_RGB2RGBAc'COLOR_BGRA2BGRc'COLOR_RGBA2RGBc'COLOR_BGR2RGBAc'COLOR_RGB2BGRAc'COLOR_RGBA2BGRc'COLOR_BGRA2RGBc'COLOR_BGR2RGBc'COLOR_RGB2BGRc'COLOR_BGRA2RGBAc'COLOR_RGBA2BGRAc'COLOR_BGR2GRAYc'COLOR_RGB2GRAYc'COLOR_GRAY2BGRc'COLOR_GRAY2RGBc'COLOR_GRAY2BGRAc'COLOR_GRAY2RGBAc'COLOR_BGRA2GRAYc'COLOR_RGBA2GRAYc'COLOR_BGR2BGR565c'COLOR_RGB2BGR565c'COLOR_BGR5652BGRc'COLOR_BGR5652RGBc'COLOR_BGRA2BGR565c'COLOR_RGBA2BGR565c'COLOR_BGR5652BGRAc'COLOR_BGR5652RGBAc'COLOR_GRAY2BGR565c'COLOR_BGR5652GRAYc'COLOR_BGR2BGR555c'COLOR_RGB2BGR555c'COLOR_BGR5552BGRc'COLOR_BGR5552RGBc'COLOR_BGRA2BGR555c'COLOR_RGBA2BGR555c'COLOR_BGR5552BGRAc'COLOR_BGR5552RGBAc'COLOR_GRAY2BGR555c'COLOR_BGR5552GRAYc'COLOR_BGR2XYZc'COLOR_RGB2XYZc'COLOR_XYZ2BGRc'COLOR_XYZ2RGBc'COLOR_BGR2YCrCbc'COLOR_RGB2YCrCbc'COLOR_YCrCb2BGRc'COLOR_YCrCb2RGBc'COLOR_BGR2HSVc'COLOR_RGB2HSVc'COLOR_BGR2Labc'COLOR_RGB2Labc'COLOR_BGR2Luvc'COLOR_RGB2Luvc'COLOR_BGR2HLSc'COLOR_RGB2HLSc'COLOR_HSV2BGRc'COLOR_HSV2RGBc'COLOR_Lab2BGRc'COLOR_Lab2RGBc'COLOR_Luv2BGRc'COLOR_Luv2RGBc'COLOR_HLS2BGRc'COLOR_HLS2RGBc'COLOR_BGR2HSV_FULLc'COLOR_RGB2HSV_FULLc'COLOR_BGR2HLS_FULLc'COLOR_RGB2HLS_FULLc'COLOR_HSV2BGR_FULLc'COLOR_HSV2RGB_FULLc'COLOR_HLS2BGR_FULLc'COLOR_HLS2RGB_FULLc'COLOR_LBGR2Labc'COLOR_LRGB2Labc'COLOR_LBGR2Luvc'COLOR_LRGB2Luvc'COLOR_Lab2LBGRc'COLOR_Lab2LRGBc'COLOR_Luv2LBGRc'COLOR_Luv2LRGBc'COLOR_BGR2YUVc'COLOR_RGB2YUVc'COLOR_YUV2BGRc'COLOR_YUV2RGBc'COLOR_YUV2RGB_NV12c'COLOR_YUV2BGR_NV12c'COLOR_YUV2RGB_NV21c'COLOR_YUV2BGR_NV21c'COLOR_YUV420sp2RGBc'COLOR_YUV420sp2BGRc'COLOR_YUV2RGBA_NV12c'COLOR_YUV2BGRA_NV12c'COLOR_YUV2RGBA_NV21c'COLOR_YUV2BGRA_NV21c'COLOR_YUV420sp2RGBAc'COLOR_YUV420sp2BGRAc'COLOR_YUV2RGB_YV12c'COLOR_YUV2BGR_YV12c'COLOR_YUV2RGB_IYUVc'COLOR_YUV2BGR_IYUVc'COLOR_YUV2RGB_I420c'COLOR_YUV2BGR_I420c'COLOR_YUV420p2RGBc'COLOR_YUV420p2BGRc'COLOR_YUV2RGBA_YV12c'COLOR_YUV2BGRA_YV12c'COLOR_YUV2RGBA_IYUVc'COLOR_YUV2BGRA_IYUVc'COLOR_YUV2RGBA_I420c'COLOR_YUV2
BGRA_I420c'COLOR_YUV420p2RGBAc'COLOR_YUV420p2BGRAc'COLOR_YUV2GRAY_420c'COLOR_YUV2GRAY_NV21c'COLOR_YUV2GRAY_NV12c'COLOR_YUV2GRAY_YV12c'COLOR_YUV2GRAY_IYUVc'COLOR_YUV2GRAY_I420c'COLOR_YUV420sp2GRAYc'COLOR_YUV420p2GRAYc'COLOR_YUV2RGB_UYVYc'COLOR_YUV2BGR_UYVYc'COLOR_YUV2RGB_Y422c'COLOR_YUV2BGR_Y422c'COLOR_YUV2RGB_UYNVc'COLOR_YUV2BGR_UYNVc'COLOR_YUV2RGBA_UYVYc'COLOR_YUV2BGRA_UYVYc'COLOR_YUV2RGBA_Y422c'COLOR_YUV2BGRA_Y422c'COLOR_YUV2RGBA_UYNVc'COLOR_YUV2BGRA_UYNVc'COLOR_YUV2RGB_YUY2c'COLOR_YUV2BGR_YUY2c'COLOR_YUV2RGB_YVYUc'COLOR_YUV2BGR_YVYUc'COLOR_YUV2RGB_YUYVc'COLOR_YUV2BGR_YUYVc'COLOR_YUV2RGB_YUNVc'COLOR_YUV2BGR_YUNVc'COLOR_YUV2RGBA_YUY2c'COLOR_YUV2BGRA_YUY2c'COLOR_YUV2RGBA_YVYUc'COLOR_YUV2BGRA_YVYUc'COLOR_YUV2RGBA_YUYVc'COLOR_YUV2BGRA_YUYVc'COLOR_YUV2RGBA_YUNVc'COLOR_YUV2BGRA_YUNVc'COLOR_YUV2GRAY_UYVYc'COLOR_YUV2GRAY_YUY2c'COLOR_YUV2GRAY_Y422c'COLOR_YUV2GRAY_UYNVc'COLOR_YUV2GRAY_YVYUc'COLOR_YUV2GRAY_YUYVc'COLOR_YUV2GRAY_YUNVc'COLOR_RGBA2mRGBAc'COLOR_mRGBA2RGBAc'COLOR_RGB2YUV_I420c'COLOR_BGR2YUV_I420c'COLOR_RGB2YUV_IYUVc'COLOR_BGR2YUV_IYUVc'COLOR_RGBA2YUV_I420c'COLOR_BGRA2YUV_I420c'COLOR_RGBA2YUV_IYUVc'COLOR_BGRA2YUV_IYUVc'COLOR_RGB2YUV_YV12c'COLOR_BGR2YUV_YV12c'COLOR_RGBA2YUV_YV12c'COLOR_BGRA2YUV_YV12c'COLOR_BayerBG2BGRc'COLOR_BayerGB2BGRc'COLOR_BayerRG2BGRc'COLOR_BayerGR2BGRc'COLOR_BayerBG2RGBc'COLOR_BayerGB2RGBc'COLOR_BayerRG2RGBc'COLOR_BayerGR2RGBc'COLOR_BayerBG2GRAYc'COLOR_BayerGB2GRAYc'COLOR_BayerRG2GRAYc'COLOR_BayerGR2GRAYc'COLOR_BayerBG2BGR_VNGc'COLOR_BayerGB2BGR_VNGc'COLOR_BayerRG2BGR_VNGc'COLOR_BayerGR2BGR_VNGc'COLOR_BayerBG2RGB_VNGc'COLOR_BayerGB2RGB_VNGc'COLOR_BayerRG2RGB_VNGc'COLOR_BayerGR2RGB_VNGc'COLOR_BayerBG2BGR_EAc'COLOR_BayerGB2BGR_EAc'COLOR_BayerRG2BGR_EAc'COLOR_BayerGR2BGR_EAc'COLOR_BayerBG2RGB_EAc'COLOR_BayerGB2RGB_EAc'COLOR_BayerRG2RGB_EAc'COLOR_BayerGR2RGB_EAcolorConversionCode$fColorCodeMatchesChannelscodeS$fColorCodeMatchesChannelscodeD$fColorConversionBayerGRRGB_EA$fColorConversionBayerRGRGB_EA$fColorConversionBayerGBRGB_EA$fColorConversionBa
yerBGRGB_EA$fColorConversionBayerGRBGR_EA$fColorConversionBayerRGBGR_EA$fColorConversionBayerGBBGR_EA$fColorConversionBayerBGBGR_EA$fColorConversionBayerGRRGB_VNG$fColorConversionBayerRGRGB_VNG$fColorConversionBayerGBRGB_VNG$fColorConversionBayerBGRGB_VNG$fColorConversionBayerGRBGR_VNG$fColorConversionBayerRGBGR_VNG$fColorConversionBayerGBBGR_VNG$fColorConversionBayerBGBGR_VNG$fColorConversionBayerGRGRAY$fColorConversionBayerRGGRAY$fColorConversionBayerGBGRAY$fColorConversionBayerBGGRAY$fColorConversionBayerGRRGB$fColorConversionBayerRGRGB$fColorConversionBayerGBRGB$fColorConversionBayerBGRGB$fColorConversionBayerGRBGR$fColorConversionBayerRGBGR$fColorConversionBayerGBBGR$fColorConversionBayerBGBGR$fColorConversionBGRAYUV_YV12$fColorConversionRGBAYUV_YV12$fColorConversionBGRYUV_YV12$fColorConversionRGBYUV_YV12$fColorConversionBGRAYUV_IYUV$fColorConversionRGBAYUV_IYUV$fColorConversionBGRAYUV_I420$fColorConversionRGBAYUV_I420$fColorConversionBGRYUV_IYUV$fColorConversionRGBYUV_IYUV$fColorConversionBGRYUV_I420$fColorConversionRGBYUV_I420$fColorConversionMRGBARGBA$fColorConversionRGBAMRGBA$fColorConversionYUVGRAY_YUNV$fColorConversionYUVGRAY_YUYV$fColorConversionYUVGRAY_YVYU$fColorConversionYUVGRAY_UYNV$fColorConversionYUVGRAY_Y422$fColorConversionYUVGRAY_YUY2$fColorConversionYUVGRAY_UYVY$fColorConversionYUVBGRA_YUNV$fColorConversionYUVRGBA_YUNV$fColorConversionYUVBGRA_YUYV$fColorConversionYUVRGBA_YUYV$fColorConversionYUVBGRA_YVYU$fColorConversionYUVRGBA_YVYU$fColorConversionYUVBGRA_YUY2$fColorConversionYUVRGBA_YUY2$fColorConversionYUVBGR_YUNV$fColorConversionYUVRGB_YUNV$fColorConversionYUVBGR_YUYV$fColorConversionYUVRGB_YUYV$fColorConversionYUVBGR_YVYU$fColorConversionYUVRGB_YVYU$fColorConversionYUVBGR_YUY2$fColorConversionYUVRGB_YUY2$fColorConversionYUVBGRA_UYNV$fColorConversionYUVRGBA_UYNV$fColorConversionYUVBGRA_Y422$fColorConversionYUVRGBA_Y422$fColorConversionYUVBGRA_UYVY$fColorConversionYUVRGBA_UYVY$fColorConversionYUVBGR_UYNV$fColorConversionYUVRGB_UYNV$fColorCon
versionYUVBGR_Y422$fColorConversionYUVRGB_Y422$fColorConversionYUVBGR_UYVY$fColorConversionYUVRGB_UYVY$fColorConversionYUV420pGRAY$fColorConversionYUV420spGRAY$fColorConversionYUVGRAY_I420$fColorConversionYUVGRAY_IYUV$fColorConversionYUVGRAY_YV12$fColorConversionYUVGRAY_NV12$fColorConversionYUVGRAY_NV21$fColorConversionYUVGRAY_420$fColorConversionYUV420pBGRA$fColorConversionYUV420pRGBA$fColorConversionYUVBGRA_I420$fColorConversionYUVRGBA_I420$fColorConversionYUVBGRA_IYUV$fColorConversionYUVRGBA_IYUV$fColorConversionYUVBGRA_YV12$fColorConversionYUVRGBA_YV12$fColorConversionYUV420pBGR$fColorConversionYUV420pRGB$fColorConversionYUVBGR_I420$fColorConversionYUVRGB_I420$fColorConversionYUVBGR_IYUV$fColorConversionYUVRGB_IYUV$fColorConversionYUVBGR_YV12$fColorConversionYUVRGB_YV12$fColorConversionYUV420spBGRA$fColorConversionYUV420spRGBA$fColorConversionYUVBGRA_NV21$fColorConversionYUVRGBA_NV21$fColorConversionYUVBGRA_NV12$fColorConversionYUVRGBA_NV12$fColorConversionYUV420spBGR$fColorConversionYUV420spRGB$fColorConversionYUVBGR_NV21$fColorConversionYUVRGB_NV21$fColorConversionYUVBGR_NV12$fColorConversionYUVRGB_NV12$fColorConversionYUVRGB$fColorConversionYUVBGR$fColorConversionRGBYUV$fColorConversionBGRYUV$fColorConversionLuvLRGB$fColorConversionLuvLBGR$fColorConversionLabLRGB$fColorConversionLabLBGR$fColorConversionLRGBLuv$fColorConversionLBGRLuv$fColorConversionLRGBLab$fColorConversionLBGRLab$fColorConversionHLSRGB_FULL$fColorConversionHLSBGR_FULL$fColorConversionHSVRGB_FULL$fColorConversionHSVBGR_FULL$fColorConversionRGBHLS_FULL$fColorConversionBGRHLS_FULL$fColorConversionRGBHSV_FULL$fColorConversionBGRHSV_FULL$fColorConversionHLSRGB$fColorConversionHLSBGR$fColorConversionLuvRGB$fColorConversionLuvBGR$fColorConversionLabRGB$fColorConversionLabBGR$fColorConversionHSVRGB$fColorConversionHSVBGR$fColorConversionRGBHLS$fColorConversionBGRHLS$fColorConversionRGBLuv$fColorConversionBGRLuv$fColorConversionRGBLab$fColorConversionBGRLab$fColorConversionRGBHSV$fColorConversionBGRH
SV$fColorConversionYCrCbRGB$fColorConversionYCrCbBGR$fColorConversionRGBYCrCb$fColorConversionBGRYCrCb$fColorConversionXYZRGB$fColorConversionXYZBGR$fColorConversionRGBXYZ$fColorConversionBGRXYZ$fColorConversionBGR555GRAY$fColorConversionGRAYBGR555$fColorConversionBGR555RGBA$fColorConversionBGR555BGRA$fColorConversionRGBABGR555$fColorConversionBGRABGR555$fColorConversionBGR555RGB$fColorConversionBGR555BGR$fColorConversionRGBBGR555$fColorConversionBGRBGR555$fColorConversionBGR565GRAY$fColorConversionGRAYBGR565$fColorConversionBGR565RGBA$fColorConversionBGR565BGRA$fColorConversionRGBABGR565$fColorConversionBGRABGR565$fColorConversionBGR565RGB$fColorConversionBGR565BGR$fColorConversionRGBBGR565$fColorConversionBGRBGR565$fColorConversionRGBAGRAY$fColorConversionBGRAGRAY$fColorConversionGRAYRGBA$fColorConversionGRAYBGRA$fColorConversionGRAYRGB$fColorConversionGRAYBGR$fColorConversionRGBGRAY$fColorConversionBGRGRAY$fColorConversionRGBABGRA$fColorConversionBGRARGBA$fColorConversionRGBBGR$fColorConversionBGRRGB$fColorConversionBGRARGB$fColorConversionRGBABGR$fColorConversionRGBBGRA$fColorConversionBGRRGBA$fColorConversionRGBARGB$fColorConversionBGRABGR$fColorConversionRGBRGBA$fColorConversionBGRBGRA inline_c_ffi_6989586621679486296 inline_c_ffi_6989586621679486304 inline_c_ffi_6989586621679486312 inline_c_ffi_6989586621679486317 inline_c_ffi_6989586621679486500 inline_c_ffi_6989586621679486508 inline_c_ffi_6989586621679486516 inline_c_ffi_6989586621679486521 inline_c_ffi_6989586621679486704 inline_c_ffi_6989586621679486715 inline_c_ffi_6989586621679486723 inline_c_ffi_6989586621679486728 inline_c_ffi_6989586621679486929 inline_c_ffi_6989586621679486940 inline_c_ffi_6989586621679486948 inline_c_ffi_6989586621679486953 inline_c_ffi_6989586621679487154 inline_c_ffi_6989586621679487168 inline_c_ffi_6989586621679487176 inline_c_ffi_6989586621679487181 inline_c_ffi_6989586621679487400 inline_c_ffi_6989586621679487414 inline_c_ffi_6989586621679487422 
inline_c_ffi_6989586621679487427 inline_c_ffi_6989586621679487646 inline_c_ffi_6989586621679487666 inline_c_ffi_6989586621679487674 inline_c_ffi_6989586621679487679 inline_c_ffi_6989586621679487934 inline_c_ffi_6989586621679487954 inline_c_ffi_6989586621679487962 inline_c_ffi_6989586621679487967 inline_c_ffi_6989586621679488222 inline_c_ffi_6989586621679488230 inline_c_ffi_6989586621679488238 inline_c_ffi_6989586621679488243 inline_c_ffi_6989586621679488426 inline_c_ffi_6989586621679488434 inline_c_ffi_6989586621679488442 inline_c_ffi_6989586621679488447 inline_c_ffi_6989586621679488630 inline_c_ffi_6989586621679488644 inline_c_ffi_6989586621679488652 inline_c_ffi_6989586621679488657 inline_c_ffi_6989586621679488876 inline_c_ffi_6989586621679488890 inline_c_ffi_6989586621679488898 inline_c_ffi_6989586621679488903 inline_c_ffi_6989586621679489122 inline_c_ffi_6989586621679489142 inline_c_ffi_6989586621679489150 inline_c_ffi_6989586621679489155 inline_c_ffi_6989586621679489410 inline_c_ffi_6989586621679489430 inline_c_ffi_6989586621679489438 inline_c_ffi_6989586621679489443 inline_c_ffi_6989586621679489698 inline_c_ffi_6989586621679489709 inline_c_ffi_6989586621679489717 inline_c_ffi_6989586621679489722 inline_c_ffi_6989586621679489923 inline_c_ffi_6989586621679489934 inline_c_ffi_6989586621679489942 inline_c_ffi_6989586621679489947 inline_c_ffi_6989586621679490148 inline_c_ffi_6989586621679490168 inline_c_ffi_6989586621679490176 inline_c_ffi_6989586621679490181 inline_c_ffi_6989586621679490436 inline_c_ffi_6989586621679490456 inline_c_ffi_6989586621679490464 inline_c_ffi_6989586621679490469 inline_c_ffi_6989586621679490724 inline_c_ffi_6989586621679490753 inline_c_ffi_6989586621679490761 inline_c_ffi_6989586621679490766 inline_c_ffi_6989586621679491075 inline_c_ffi_6989586621679491104 inline_c_ffi_6989586621679491112 inline_c_ffi_6989586621679491117 inline_c_ffi_6989586621679491426 inline_c_ffi_6989586621679491464 inline_c_ffi_6989586621679491472 
inline_c_ffi_6989586621679491477 inline_c_ffi_6989586621679491840 inline_c_ffi_6989586621679491878 inline_c_ffi_6989586621679491886 inline_c_ffi_6989586621679491891 inline_c_ffi_6989586621679492254 inline_c_ffi_6989586621679492268 inline_c_ffi_6989586621679492276 inline_c_ffi_6989586621679492281 inline_c_ffi_6989586621679492500 inline_c_ffi_6989586621679492514 inline_c_ffi_6989586621679492522 inline_c_ffi_6989586621679492527 inline_c_ffi_6989586621679492746 inline_c_ffi_6989586621679492784 inline_c_ffi_6989586621679492792 inline_c_ffi_6989586621679492797 inline_c_ffi_6989586621679493160 inline_c_ffi_6989586621679493198 inline_c_ffi_6989586621679493206 inline_c_ffi_6989586621679493211 inline_c_ffi_6989586621679493574 inline_c_ffi_6989586621679493624 inline_c_ffi_6989586621679493632 inline_c_ffi_6989586621679493637 inline_c_ffi_6989586621679494072 inline_c_ffi_6989586621679494122 inline_c_ffi_6989586621679494130 inline_c_ffi_6989586621679494135 inline_c_ffi_6989586621679494570 inline_c_ffi_6989586621679494587 inline_c_ffi_6989586621679494595 inline_c_ffi_6989586621679494600 inline_c_ffi_6989586621679494837 inline_c_ffi_6989586621679494854 inline_c_ffi_6989586621679494862 inline_c_ffi_6989586621679494867 inline_c_ffi_6989586621679495104 inline_c_ffi_6989586621679495124 inline_c_ffi_6989586621679495132 inline_c_ffi_6989586621679495137 inline_c_ffi_6989586621679495392 inline_c_ffi_6989586621679495412 inline_c_ffi_6989586621679495420 inline_c_ffi_6989586621679495425 inline_c_ffi_6989586621679495680 inline_c_ffi_6989586621679495688 inline_c_ffi_6989586621679495693 inline_c_ffi_6989586621679495830 inline_c_ffi_6989586621679495838 inline_c_ffi_6989586621679495843#repa-3.4.1.2-DMB50ySXpC65Ocf6jv4ubmData.Array.Repa.BaseArray inline_c_ffi_6989586621679551791D:R:ArrayMshdepth0 inline_c_ffi_6989586621679560111 inline_c_ffi_6989586621679560121 inline_c_ffi_6989586621679560131 inline_c_ffi_6989586621679560141 inline_c_ffi_6989586621679560151 inline_c_ffi_6989586621679560161 
inline_c_ffi_6989586621679560171 inline_c_ffi_6989586621679560181 inline_c_ffi_6989586621679560191 inline_c_ffi_6989586621679560201 inline_c_ffi_6989586621679560211 inline_c_ffi_6989586621679560221 inline_c_ffi_6989586621679560231 inline_c_ffi_6989586621679560241 inline_c_ffi_6989586621679560251 inline_c_ffi_6989586621679560261 inline_c_ffi_6989586621679560271 inline_c_ffi_6989586621679560281 inline_c_ffi_6989586621679560291 inline_c_ffi_6989586621679560301 inline_c_ffi_6989586621679560311 inline_c_ffi_6989586621679560321 inline_c_ffi_6989586621679560331 inline_c_ffi_6989586621679560341 inline_c_ffi_6989586621679560351 inline_c_ffi_6989586621679560361 inline_c_ffi_6989586621679560371 inline_c_ffi_6989586621679560381 inline_c_ffi_6989586621679560391 inline_c_ffi_6989586621679560401 inline_c_ffi_6989586621679560411 inline_c_ffi_6989586621679560421 inline_c_ffi_6989586621679560431 inline_c_ffi_6989586621679560441 inline_c_ffi_6989586621679560451 inline_c_ffi_6989586621679560461 inline_c_ffi_6989586621679560471 inline_c_ffi_6989586621679560481 inline_c_ffi_6989586621679560491 inline_c_ffi_6989586621679560501 inline_c_ffi_6989586621679560511 inline_c_ffi_6989586621679560521 inline_c_ffi_6989586621679560531 repaToM23 repaToM33 $fToMatV3 $fToMatV2 $fFromMatV3 $fFromMatV2 $fToMatVec $fToMatVec0 $fToMatVec1 $fToMatVec2 $fToMatVec3 $fToMatVec4 $fToMatVec5 $fToMatVec6 $fToMatVec7 $fToMatMatx $fToMatMatx0 $fToMatMatx1 $fToMatMatx2 $fToMatMatx3 $fToMatMatx4 $fToMatMatx5 $fToMatMatx6 $fToMatMatx7 $fToMatMatx8 $fToMatMatx9 $fToMatMatx10 $fToMatMatx11 $fToMatMatx12 $fToMatMatx13 $fToMatMatx14 $fToMatMatx15 $fToMatMatx16 $fToMatMatx17 $fToMatMatx18 $fToMatMatx19 $fToMatMatx20 $fToMatMatx21 $fToMatMatx22 $fToMatMatx23 $fToMatMatx24 $fToMatMatx25 $fToMatMatx26 $fToMatMatx27 $fToMatMatx28 $fToMatMatx29 $fToMatMatx30 $fToMatMatx31 $fToMatMatx32 $fFromMatMat $fToMatMat inline_c_ffi_6989586621679599538 inline_c_ffi_6989586621679599556 inline_c_ffi_6989586621679599589 
inline_c_ffi_6989586621679599623 inline_c_ffi_6989586621679621954 inline_c_ffi_6989586621679621964 inline_c_ffi_6989586621679621983 inline_c_ffi_6989586621679621992 inline_c_ffi_6989586621679622072 inline_c_ffi_6989586621679622083 inline_c_ffi_6989586621679622105 inline_c_ffi_6989586621679622122 inline_c_ffi_6989586621679622141 windowNamewindowMouseCallbackwindowTrackbars TrackbarStatetrackbarCallbacktrackbarValuePtr freeTrackbarmatchEventFlagc'EVENT_FLAG_LBUTTONc'EVENT_FLAG_RBUTTONc'EVENT_FLAG_MBUTTONc'EVENT_FLAG_CTRLKEYc'EVENT_FLAG_SHIFTKEYc'EVENT_FLAG_ALTKEYc'EVENT_MOUSEMOVEc'EVENT_LBUTTONDOWNc'EVENT_RBUTTONDOWNc'EVENT_MBUTTONDOWNc'EVENT_LBUTTONUPc'EVENT_RBUTTONUPc'EVENT_MBUTTONUPc'EVENT_LBUTTONDBLCLKc'EVENT_RBUTTONDBLCLKc'EVENT_MBUTTONDBLCLKc'EVENT_MOUSEWHEELc'EVENT_MOUSEHWHEELunmarshalEvent inline_c_ffi_6989586621679642305 inline_c_ffi_6989586621679642321 inline_c_ffi_6989586621679642345 inline_c_ffi_6989586621679649557 inline_c_ffi_6989586621679654115 inline_c_ffi_6989586621679654134 inline_c_ffi_6989586621679654147 inline_c_ffi_6989586621679654157 inline_c_ffi_6989586621679654167 inline_c_ffi_6989586621679654177 inline_c_ffi_6989586621679654191 inline_c_ffi_6989586621679654206 inline_c_ffi_6989586621679654220 inline_c_ffi_6989586621679654238 inline_c_ffi_6989586621679654255 inline_c_ffi_6989586621679654264unVideoCapture inline_c_ffi_6989586621679661080 inline_c_ffi_6989586621679661090 inline_c_ffi_6989586621679661100 inline_c_ffi_6989586621679661110 inline_c_ffi_6989586621679661136 inline_c_ffi_6989586621679661590 inline_c_ffi_6989586621679661595 inline_c_ffi_6989586621679661640 inline_c_ffi_6989586621679661671 inline_c_ffi_6989586621679661681 inline_c_ffi_6989586621679662460 inline_c_ffi_6989586621679662465 inline_c_ffi_6989586621679662499 inline_c_ffi_6989586621679662521 inline_c_ffi_6989586621679662531 unKeyPointunDMatch newKeyPoint newDMatch inline_c_ffi_6989586621679680804 inline_c_ffi_6989586621679680814 inline_c_ffi_6989586621679680842 
inline_c_ffi_6989586621679680853 inline_c_ffi_6989586621679680877 inline_c_ffi_6989586621679680888 inline_c_ffi_6989586621679680938 inline_c_ffi_6989586621679680966 inline_c_ffi_6989586621679680978 inline_c_ffi_6989586621679681051 inline_c_ffi_6989586621679681075 inline_c_ffi_6989586621679681087 inline_c_ffi_6989586621679681101 inline_c_ffi_6989586621679681111 inline_c_ffi_6989586621679681129 inline_c_ffi_6989586621679681147 inline_c_ffi_6989586621679681161 inline_c_ffi_6989586621679681209 inline_c_ffi_6989586621679681219 inline_c_ffi_6989586621679681229 inline_c_ffi_6989586621679681238 inline_c_ffi_6989586621679681247DrawMatchesParams matchColorsinglePointColorflagsunFlannBasedMatcher unBFMatcher BaseMatcher unBaseMatcherunSimpleBlobDetectorunOrbinfinity marshalWTA_Kc'HARRIS_SCORE c'FAST_SCOREmarshalOrbScoreTypenewOrbnewSimpleBlobDetectormarshalIndexParamsmarshallSearchParams inline_c_ffi_6989586621679743848 inline_c_ffi_6989586621679743858 inline_c_ffi_6989586621679743897 inline_c_ffi_6989586621679743908 inline_c_ffi_6989586621679743952 inline_c_ffi_6989586621679743971 inline_c_ffi_6989586621679743980unCascadeClassifier inline_c_ffi_6989586621679756474c'COLORMAP_AUTUMNc'COLORMAP_BONEc'COLORMAP_JETc'COLORMAP_WINTERc'COLORMAP_RAINBOWc'COLORMAP_OCEANc'COLORMAP_SUMMERc'COLORMAP_SPRINGc'COLORMAP_COOLc'COLORMAP_HSVc'COLORMAP_PINKc'COLORMAP_HOTc'COLORMAP_PARULAmarshalColorMap inline_c_ffi_6989586621679760381 inline_c_ffi_6989586621679760412 inline_c_ffi_6989586621679760456 inline_c_ffi_6989586621679760484 inline_c_ffi_6989586621679760516 inline_c_ffi_6989586621679760555 inline_c_ffi_6989586621679760587 inline_c_ffi_6989586621679760611 inline_c_ffi_6989586621679760654 inline_c_ffi_6989586621679760682 inline_c_ffi_6989586621679760715 inline_c_ffi_6989586621679760733c'LINE_8c'LINE_4 c'LINE_AAmarshalLineType marshalFont 
c'FONT_ITALICmarshalFontSlantc'FONT_HERSHEY_SIMPLEXc'FONT_HERSHEY_PLAINc'FONT_HERSHEY_DUPLEXc'FONT_HERSHEY_COMPLEXc'FONT_HERSHEY_TRIPLEXc'FONT_HERSHEY_COMPLEX_SMALLc'FONT_HERSHEY_SCRIPT_SIMPLEXc'FONT_HERSHEY_SCRIPT_COMPLEXmarshalFontFaceghc-prim GHC.TypesTruemarshalContourDrawMode inline_c_ffi_6989586621679852235 inline_c_ffi_6989586621679852279 inline_c_ffi_6989586621679852290 inline_c_ffi_6989586621679852330 inline_c_ffi_6989586621679852341 inline_c_ffi_6989586621679852379 inline_c_ffi_6989586621679852390GHC.WordWord8Word16Float inline_c_ffi_6989586621679880487 inline_c_ffi_6989586621679880528 inline_c_ffi_6989586621679880563 inline_c_ffi_6989586621679880577 inline_c_ffi_6989586621679880610 inline_c_ffi_6989586621679880632marshalFloodFillOperationFlags inline_c_ffi_6989586621679892679c'CV_TM_SQDIFFc'CV_TM_SQDIFF_NORMED c'CV_TM_CCORRc'CV_TM_CCORR_NORMEDc'CV_TM_CCOEFFc'CV_TM_CCOEFF_NORMEDmarshalMatchTemplateMethod inline_c_ffi_6989586621679897002 inline_c_ffi_6989586621679897026 inline_c_ffi_6989586621679897056 inline_c_ffi_6989586621679897084 inline_c_ffi_6989586621679897113 inline_c_ffi_6989586621679897125 inline_c_ffi_6989586621679897145 inline_c_ffi_6989586621679897158c'CV_RETR_EXTERNALc'CV_RETR_LISTc'CV_RETR_CCOMPc'CV_RETR_TREEc'CV_CHAIN_APPROX_NONEc'CV_CHAIN_APPROX_SIMPLEc'CV_CHAIN_APPROX_TC89_L1c'CV_CHAIN_APPROX_TC89_KCOSmarshalContourRetrievalMode!marshalContourApproximationMethodproduct0$fToHElemsDouble$fToHElemsFloat$fToHElemsInt32$fToHElemsInt16$fToHElemsWord16$fToHElemsInt8$fToHElemsWord8c'INTER_NEARESTc'INTER_LINEAR c'INTER_CUBIC c'INTER_AREAc'INTER_LANCZOS4marshalInterpolationMethodc'BORDER_CONSTANTc'BORDER_REPLICATEc'BORDER_REFLECT c'BORDER_WRAPc'BORDER_REFLECT_101c'BORDER_TRANSPARENTc'BORDER_ISOLATEDmarshalBorderMode inline_c_ffi_6989586621679981778 inline_c_ffi_6989586621679981816 inline_c_ffi_6989586621679981854 inline_c_ffi_6989586621679981868 inline_c_ffi_6989586621679981882 inline_c_ffi_6989586621679981900 inline_c_ffi_6989586621679981930 
inline_c_ffi_6989586621679981952marshalResizeAbsRelc'WARP_FILL_OUTLIERSc'WARP_INVERSE_MAP inline_c_ffi_6989586621679999571 inline_c_ffi_6989586621679999588 inline_c_ffi_6989586621679999607 inline_c_ffi_6989586621679999634 inline_c_ffi_6989586621679999670 inline_c_ffi_6989586621679999701 inline_c_ffi_6989586621679999737 inline_c_ffi_6989586621679999776 inline_c_ffi_6989586621679999800 defaultAnchor c'MORPH_RECTc'MORPH_ELLIPSE c'MORPH_CROSSmarshalMorphShape c'MORPH_OPEN c'MORPH_CLOSEc'MORPH_GRADIENTc'MORPH_TOPHATc'MORPH_BLACKHATmarshalMorphOperationJunJ inline_c_ffi_6989586621680390104 inline_c_ffi_6989586621680390122 inline_c_ffi_6989586621680390144 inline_c_ffi_6989586621680390158 inline_c_ffi_6989586621680390180 inline_c_ffi_6989586621680390194 inline_c_ffi_6989586621680390204 inline_c_ffi_6989586621680390217 inline_c_ffi_6989586621680390227 inline_c_ffi_6989586621680390240 inline_c_ffi_6989586621680390250 inline_c_ffi_6989586621680390260unBackgroundSubtractorMOG2unBackgroundSubtractorKNNFalse inline_c_ffi_6989586621680402313 inline_c_ffi_6989586621680402323 inline_c_ffi_6989586621680402333 inline_c_ffi_6989586621680402347 inline_c_ffi_6989586621680402356 unVideoWriter inline_c_ffi_6989586621680406189 inline_c_ffi_6989586621680406207 inline_c_ffi_6989586621680406221 inline_c_ffi_6989586621680406239 inline_c_ffi_6989586621680406257 inline_c_ffi_6989586621680406275 inline_c_ffi_6989586621680406308 inline_c_ffi_6989586621680406330 inline_c_ffi_6989586621680406348 inline_c_ffi_6989586621680406370 inline_c_ffi_6989586621680406384 inline_c_ffi_6989586621680406402 inline_c_ffi_6989586621680406420 inline_c_ffi_6989586621680406438 inline_c_ffi_6989586621680406455 inline_c_ffi_6989586621680406472 inline_c_ffi_6989586621680406498 inline_c_ffi_6989586621680406519 inline_c_ffi_6989586621680406545 inline_c_ffi_6989586621680406578 inline_c_ffi_6989586621680406593 inline_c_ffi_6989586621680406615 inline_c_ffi_6989586621680406633 inline_c_ffi_6989586621680406647 
inline_c_ffi_6989586621680406664 inline_c_ffi_6989586621680406681marshallFlipDirection inline_c_ffi_6989586621680447933 inline_c_ffi_6989586621680447979 inline_c_ffi_6989586621680448008marshalFundamentalMatMethodmarshalWhichImagemarshalFindHomographyMethodplusPtrSpeekSpokeSisoApply