gi-gstvideo-1.0.24: GStreamerVideo bindings
Copyright: Will Thompson, Iñaki García Etxebarria and Jonas Platte
License: LGPL-2.1
Maintainer: Iñaki García Etxebarria
Safe Haskell: Safe-Inferred
Language: Haskell2010

GI.GstVideo.Functions

Methods

bufferAddVideoAfdMeta

bufferAddVideoAfdMeta Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> Buffer

buffer: a Buffer

-> Word8

field: 0 for progressive or field 1, and 1 for field 2

-> VideoAFDSpec

spec: VideoAFDSpec that applies to AFD value

-> VideoAFDValue

afd: VideoAFDValue AFD enumeration

-> m VideoAFDMeta

Returns: the VideoAFDMeta on buffer.

Attaches VideoAFDMeta metadata to buffer with the given parameters.

Since: 1.18

bufferAddVideoAffineTransformationMeta

bufferAddVideoAffineTransformationMeta Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> Buffer

buffer: a Buffer

-> m VideoAffineTransformationMeta

Returns: the VideoAffineTransformationMeta on buffer.

Attaches GstVideoAffineTransformationMeta metadata to buffer with the given parameters.

Since: 1.8

bufferAddVideoBarMeta

bufferAddVideoBarMeta Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> Buffer

buffer: a Buffer

-> Word8

field: 0 for progressive or field 1, and 1 for field 2

-> Bool

isLetterbox: if True, the bar data specifies letterbox bars; otherwise pillarbox bars

-> Word32

barData1: If isLetterbox is true, then the value specifies the last line of a horizontal letterbox bar area at top of reconstructed frame. Otherwise, it specifies the last horizontal luminance sample of a vertical pillarbox bar area at the left side of the reconstructed frame

-> Word32

barData2: If isLetterbox is true, then the value specifies the first line of a horizontal letterbox bar area at bottom of reconstructed frame. Otherwise, it specifies the first horizontal luminance sample of a vertical pillarbox bar area at the right side of the reconstructed frame.

-> m VideoBarMeta

Returns: the VideoBarMeta on buffer.

See Table 6.11 "Bar Data Syntax" in ATSC A/53 Part 4: https://www.atsc.org/wp-content/uploads/2015/03/a_53-Part-4-2009.pdf

Attaches VideoBarMeta metadata to buffer with the given parameters.

Since: 1.18

bufferAddVideoCaptionMeta

bufferAddVideoCaptionMeta Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> Buffer

buffer: a Buffer

-> VideoCaptionType

captionType: The type of Closed Caption to add

-> ByteString

data: The Closed Caption data

-> m VideoCaptionMeta

Returns: the VideoCaptionMeta on buffer.

Attaches VideoCaptionMeta metadata to buffer with the given parameters.

Since: 1.16

bufferAddVideoGlTextureUploadMeta

bufferAddVideoGlTextureUploadMeta Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> Buffer

buffer: a Buffer

-> VideoGLTextureOrientation

textureOrientation: the VideoGLTextureOrientation

-> Word32

nTextures: the number of textures

-> VideoGLTextureType

textureType: array of VideoGLTextureType

-> VideoGLTextureUpload

upload: the function to upload the buffer to a specific texture ID

-> BoxedCopyFunc

userDataCopy: function to copy userData

-> BoxedFreeFunc

userDataFree: function to free userData

-> m VideoGLTextureUploadMeta

Returns: the VideoGLTextureUploadMeta on buffer.

Attaches GstVideoGLTextureUploadMeta metadata to buffer with the given parameters.

bufferAddVideoMeta

bufferAddVideoMeta Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> Buffer

buffer: a Buffer

-> [VideoFrameFlags]

flags: VideoFrameFlags

-> VideoFormat

format: a VideoFormat

-> Word32

width: the width

-> Word32

height: the height

-> m VideoMeta

Returns: the VideoMeta on buffer.

Attaches GstVideoMeta metadata to buffer with the given parameters and the default offsets and strides for format and width x height.

This function calculates the default offsets and strides and then calls bufferAddVideoMetaFull with them.
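For illustration, the default layout for the planar I420 format can be sketched as follows. This is a hypothetical helper, not part of gi-gstvideo, and it ignores the stride alignment the real implementation applies:

```haskell
-- Simplified sketch of the default plane offsets and strides that
-- bufferAddVideoMeta would compute for planar I420 (4:2:0 subsampling).
-- Hypothetical helper for illustration only; the real implementation
-- also applies stride alignment.
i420DefaultLayout :: Int -> Int -> ([Int], [Int])  -- (offsets, strides)
i420DefaultLayout w h =
  let chromaW = (w + 1) `div` 2  -- chroma planes are half width
      chromaH = (h + 1) `div` 2  -- and half height
      strides = [w, chromaW, chromaW]
      offsets = [0, w * h, w * h + chromaW * chromaH]
  in (offsets, strides)
```

For a 4x4 frame this gives offsets [0, 16, 20] and strides [4, 2, 2].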

bufferAddVideoMetaFull

bufferAddVideoMetaFull Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> Buffer

buffer: a Buffer

-> [VideoFrameFlags]

flags: VideoFrameFlags

-> VideoFormat

format: a VideoFormat

-> Word32

width: the width

-> Word32

height: the height

-> Word32

nPlanes: number of planes

-> [Word64]

offset: offset of each plane

-> [Int32]

stride: stride of each plane

-> m VideoMeta

Returns: the VideoMeta on buffer.

Attaches GstVideoMeta metadata to buffer with the given parameters.

bufferAddVideoOverlayCompositionMeta

bufferAddVideoOverlayCompositionMeta Source #

Sets an overlay composition on a buffer. The buffer will obtain its own reference to the composition, meaning this function does not take ownership of comp.

bufferAddVideoRegionOfInterestMeta

bufferAddVideoRegionOfInterestMeta Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> Buffer

buffer: a Buffer

-> Text

roiType: Type of the region of interest (e.g. "face")

-> Word32

x: X position

-> Word32

y: Y position

-> Word32

w: width

-> Word32

h: height

-> m VideoRegionOfInterestMeta

Returns: the VideoRegionOfInterestMeta on buffer.

Attaches VideoRegionOfInterestMeta metadata to buffer with the given parameters.

bufferAddVideoRegionOfInterestMetaId

bufferAddVideoRegionOfInterestMetaId Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> Buffer

buffer: a Buffer

-> Word32

roiType: Type of the region of interest as a GQuark (e.g. the quark corresponding to "face")

-> Word32

x: X position

-> Word32

y: Y position

-> Word32

w: width

-> Word32

h: height

-> m VideoRegionOfInterestMeta

Returns: the VideoRegionOfInterestMeta on buffer.

Attaches VideoRegionOfInterestMeta metadata to buffer with the given parameters.

bufferAddVideoTimeCodeMeta

bufferAddVideoTimeCodeMeta Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> Buffer

buffer: a Buffer

-> VideoTimeCode

tc: a VideoTimeCode

-> m (Maybe VideoTimeCodeMeta)

Returns: the VideoTimeCodeMeta on buffer, or (since 1.16) Nothing if the timecode was invalid.

Attaches VideoTimeCodeMeta metadata to buffer with the given parameters.

Since: 1.10

bufferAddVideoTimeCodeMetaFull

bufferAddVideoTimeCodeMetaFull Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> Buffer

buffer: a Buffer

-> Word32

fpsN: framerate numerator

-> Word32

fpsD: framerate denominator

-> DateTime

latestDailyJam: a DateTime for the latest daily jam

-> [VideoTimeCodeFlags]

flags: a VideoTimeCodeFlags

-> Word32

hours: hours since the daily jam

-> Word32

minutes: minutes since the daily jam

-> Word32

seconds: seconds since the daily jam

-> Word32

frames: frames since the daily jam

-> Word32

fieldCount: fields since the daily jam

-> m VideoTimeCodeMeta

Returns: the VideoTimeCodeMeta on buffer, or (since 1.16) Nothing if the timecode was invalid.

Attaches VideoTimeCodeMeta metadata to buffer with the given parameters.

Since: 1.10

bufferGetVideoMeta

bufferGetVideoMeta Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> Buffer

buffer: a Buffer

-> m VideoMeta

Returns: the VideoMeta with lowest id (usually 0) or Nothing when there is no such metadata on buffer.

Find the VideoMeta on buffer with the lowest id.

Buffers can contain multiple VideoMeta metadata items when dealing with multiview buffers.

bufferGetVideoMetaId

bufferGetVideoMetaId Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> Buffer

buffer: a Buffer

-> Int32

id: a metadata id

-> m VideoMeta

Returns: the VideoMeta with id or Nothing when there is no such metadata on buffer.

Find the VideoMeta on buffer with the given id.

Buffers can contain multiple VideoMeta metadata items when dealing with multiview buffers.

bufferGetVideoRegionOfInterestMetaId

bufferGetVideoRegionOfInterestMetaId Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> Buffer

buffer: a Buffer

-> Int32

id: a metadata id

-> m VideoRegionOfInterestMeta

Returns: the VideoRegionOfInterestMeta with id or Nothing when there is no such metadata on buffer.

Find the VideoRegionOfInterestMeta on buffer with the given id.

Buffers can contain multiple VideoRegionOfInterestMeta metadata items if multiple regions of interests are marked on a frame.

bufferPoolConfigGetVideoAlignment

bufferPoolConfigGetVideoAlignment Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> Structure

config: a Structure

-> VideoAlignment

align: a VideoAlignment

-> m Bool

Returns: True if config could be parsed correctly.

Get the video alignment from the bufferpool configuration config in align.

bufferPoolConfigSetVideoAlignment

bufferPoolConfigSetVideoAlignment Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> Structure

config: a Structure

-> VideoAlignment

align: a VideoAlignment

-> m () 

Set the video alignment in align to the bufferpool configuration config.

isVideoOverlayPrepareWindowHandleMessage

isVideoOverlayPrepareWindowHandleMessage Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> Message

msg: a Message

-> m Bool

Returns: whether msg is a "prepare-window-handle" message

Convenience function to check if the given message is a "prepare-window-handle" message from a VideoOverlay.

videoAfdMetaApiGetType

videoAfdMetaApiGetType :: (HasCallStack, MonadIO m) => m GType Source #

No description available in the introspection data.

videoAffineTransformationMetaApiGetType

videoAffineTransformationMetaApiGetType :: (HasCallStack, MonadIO m) => m GType Source #

No description available in the introspection data.

videoBarMetaApiGetType

videoBarMetaApiGetType :: (HasCallStack, MonadIO m) => m GType Source #

No description available in the introspection data.

videoBlend

videoBlend Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> VideoFrame

dest: The VideoFrame where to blend src in

-> VideoFrame

src: the VideoFrame that we want to blend into dest

-> Int32

x: the x offset in pixels at which the src image should be blended

-> Int32

y: the y offset in pixels at which the src image should be blended

-> Float

globalAlpha: the global alpha that each per-pixel alpha value is multiplied by

-> m Bool 

Lets you blend the src image into the dest image.
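The role of globalAlpha can be illustrated with a single-channel sketch of the blend formula (assumed semantics based on the parameter descriptions above, not the actual implementation, which operates on whole VideoFrames):

```haskell
-- One-channel alpha blend with a global alpha factor; all values in [0..1].
-- Illustrative only: videoBlend itself works on complete VideoFrames.
blendChannel
  :: Double  -- globalAlpha, multiplied with each per-pixel alpha
  -> Double  -- src pixel alpha
  -> Double  -- src channel value
  -> Double  -- dest channel value
  -> Double
blendChannel globalAlpha srcAlpha src dest =
  let a = srcAlpha * globalAlpha
  in src * a + dest * (1 - a)
```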

videoBlendScaleLinearRGBA

videoBlendScaleLinearRGBA Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> VideoInfo

src: the VideoInfo describing the video data in srcBuffer

-> Buffer

srcBuffer: the source buffer containing video pixels to scale

-> Int32

destHeight: the height in pixels to scale the video data in srcBuffer to

-> Int32

destWidth: the width in pixels to scale the video data in srcBuffer to

-> m (VideoInfo, Buffer) 

Scales a buffer containing RGBA (or AYUV) video. This is an internal helper function which is used to scale subtitle overlays, and may be deprecated in the near future. Use VideoScaler to scale video buffers instead.

videoCalculateDisplayRatio

videoCalculateDisplayRatio Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> Word32

videoWidth: Width of the video frame in pixels

-> Word32

videoHeight: Height of the video frame in pixels

-> Word32

videoParN: Numerator of the pixel aspect ratio of the input video.

-> Word32

videoParD: Denominator of the pixel aspect ratio of the input video.

-> Word32

displayParN: Numerator of the pixel aspect ratio of the display device

-> Word32

displayParD: Denominator of the pixel aspect ratio of the display device

-> m (Bool, Word32, Word32)

Returns: a boolean indicating success, together with the calculated display ratio as numerator and denominator (dar_n and dar_d). The boolean is False in the case of integer overflow or another error.

Given the Pixel Aspect Ratio and size of an input video frame, and the pixel aspect ratio of the intended display device, calculates the actual display ratio the video will be rendered with.
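The arithmetic amounts to DAR = (videoWidth * videoParN * displayParD) : (videoHeight * videoParD * displayParN), reduced to lowest terms. A pure sketch (hypothetical helper; the real function additionally guards against integer overflow):

```haskell
-- Display aspect ratio from frame size, video PAR and display PAR,
-- reduced with gcd. Illustrative helper, not part of gi-gstvideo.
calculateDisplayRatio
  :: Integer -> Integer  -- videoWidth, videoHeight
  -> Integer -> Integer  -- videoParN, videoParD
  -> Integer -> Integer  -- displayParN, displayParD
  -> (Integer, Integer)  -- (darN, darD)
calculateDisplayRatio w h vparN vparD dparN dparD =
  let n = w * vparN * dparD
      d = h * vparD * dparN
      g = gcd n d
  in (n `div` g, d `div` g)
```

For example, a 720x576 frame with a 16:15 pixel aspect ratio on a square-pixel display gives a 4:3 display ratio.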

videoCaptionMetaApiGetType

videoCaptionMetaApiGetType :: (HasCallStack, MonadIO m) => m GType Source #

No description available in the introspection data.

videoChromaFromString

videoChromaFromString Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> Text

s: a chroma site string

-> m [VideoChromaSite]

Returns: a VideoChromaSite or VideoChromaSiteUnknown when s does not contain a valid chroma description.

Convert s to a VideoChromaSite

videoChromaResample

videoChromaResample Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> VideoChromaResample

resample: a VideoChromaResample

-> Ptr ()

lines: pixel lines

-> Int32

width: the number of pixels on one line

-> m () 

Perform resampling of width chroma pixels in lines.

videoChromaToString

videoChromaToString Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> [VideoChromaSite]

site: a VideoChromaSite

-> m Text

Returns: a string describing site.

Converts site to its string representation.

videoColorTransferDecode

videoColorTransferDecode Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> VideoTransferFunction

func: a VideoTransferFunction

-> Double

val: a value

-> m Double

Returns: the gamma decoded value of val

Convert val to its gamma decoded value. This is the inverse operation of videoColorTransferEncode.

For a non-linear value L' in the range [0..1], conversion to the linear L is in general performed with a power function like:

  L = L' ^ gamma

Depending on func, different formulas might be applied. Some formulas encode a linear segment in the lower range.

Since: 1.6

videoColorTransferEncode

videoColorTransferEncode Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> VideoTransferFunction

func: a VideoTransferFunction

-> Double

val: a value

-> m Double

Returns: the gamma encoded value of val

Convert val to its gamma encoded value.

For a linear value L in the range [0..1], conversion to the non-linear (gamma encoded) L' is in general performed with a power function like:

  L' = L ^ (1 / gamma)

Depending on func, different formulas might be applied. Some formulas encode a linear segment in the lower range.

Since: 1.6
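For the pure power-law transfer functions the encode and decode operations can be sketched as below; transfer functions with a linear segment in the lower range (such as BT.709) deviate from this near zero:

```haskell
-- Pure power-law gamma transfer, the simple case of
-- videoColorTransferEncode / videoColorTransferDecode.
-- Transfer functions with a linear low-range segment are not covered.
gammaEncode, gammaDecode :: Double -> Double -> Double
gammaEncode gamma l  = l  ** (1 / gamma)  -- linear L      -> non-linear L'
gammaDecode gamma l' = l' ** gamma        -- non-linear L' -> linear L
```

Encoding and then decoding with the same gamma is the identity (up to floating-point error).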

videoConvertSample

videoConvertSample Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> Sample

sample: a Sample

-> Caps

toCaps: the Caps to convert to

-> Word64

timeout: the maximum amount of time allowed for the processing.

-> m Sample

Returns: the converted Sample. (Can throw GError if an error happened.)

Converts a raw video buffer into the specified output caps.

The output caps can be any raw video formats or any image formats (jpeg, png, ...).

The width, height and pixel-aspect-ratio can also be specified in the output caps.

videoConvertSampleAsync

videoConvertSampleAsync Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> Sample

sample: a Sample

-> Caps

toCaps: the Caps to convert to

-> Word64

timeout: the maximum amount of time allowed for the processing.

-> VideoConvertSampleCallback

callback: VideoConvertSampleCallback that will be called after conversion.

-> m () 

Converts a raw video buffer into the specified output caps.

The output caps can be any raw video formats or any image formats (jpeg, png, ...).

The width, height and pixel-aspect-ratio can also be specified in the output caps.

callback will be called after conversion, when an error occurs, or if the conversion doesn't finish within timeout. callback will always be called from the thread-default GMainContext, see mainContextGetThreadDefault. If GLib before 2.22 is used, this will always be the global default main context.

destroyNotify will be called after the callback was called and userData is not needed anymore.

videoCropMetaApiGetType

videoCropMetaApiGetType :: (HasCallStack, MonadIO m) => m GType Source #

No description available in the introspection data.

videoEventIsForceKeyUnit

videoEventIsForceKeyUnit Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> Event

event: A Event to check

-> m Bool

Returns: True if the event is a valid force key unit event

Checks if an event is a force key unit event. Returns True for both upstream and downstream force key unit events.

videoEventNewDownstreamForceKeyUnit

videoEventNewDownstreamForceKeyUnit Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> Word64

timestamp: the timestamp of the buffer that starts a new key unit

-> Word64

streamTime: the stream_time of the buffer that starts a new key unit

-> Word64

runningTime: the running_time of the buffer that starts a new key unit

-> Bool

allHeaders: True to produce headers when starting a new key unit

-> Word32

count: integer that can be used to number key units

-> m Event

Returns: The new GstEvent

Creates a new downstream force key unit event. A downstream force key unit event can be sent down the pipeline to request downstream elements to produce a key unit. A downstream force key unit event must also be sent when handling an upstream force key unit event to notify downstream that the latter has been handled.

To parse an event created by videoEventNewDownstreamForceKeyUnit use videoEventParseDownstreamForceKeyUnit.

videoEventNewStillFrame

videoEventNewStillFrame Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> Bool

inStill: boolean value for the still-frame state of the event.

-> m Event

Returns: The new GstEvent

Creates a new Still Frame event. If inStill is True, then the event represents the start of a still frame sequence. If it is False, then the event ends a still frame sequence.

To parse an event created by videoEventNewStillFrame use videoEventParseStillFrame.

videoEventNewUpstreamForceKeyUnit

videoEventNewUpstreamForceKeyUnit Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> Word64

runningTime: the running_time at which a new key unit should be produced

-> Bool

allHeaders: True to produce headers when starting a new key unit

-> Word32

count: integer that can be used to number key units

-> m Event

Returns: The new GstEvent

Creates a new upstream force key unit event. An upstream force key unit event can be sent to request upstream elements to produce a key unit.

runningTime can be set to request a new key unit at a specific running_time. If set to GST_CLOCK_TIME_NONE, upstream elements will produce a new key unit as soon as possible.

To parse an event created by videoEventNewUpstreamForceKeyUnit use videoEventParseUpstreamForceKeyUnit.

videoEventParseDownstreamForceKeyUnit

videoEventParseDownstreamForceKeyUnit Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> Event

event: A Event to parse

-> m (Bool, Word64, Word64, Word64, Bool, Word32)

Returns: True if the event is a valid downstream force key unit event.

Get timestamp, stream-time, running-time, all-headers and count in the force key unit event. See videoEventNewDownstreamForceKeyUnit for a full description of the downstream force key unit event.

runningTime will be adjusted for any pad offsets of pads it was passing through.

videoEventParseStillFrame

videoEventParseStillFrame Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> Event

event: A Event to parse

-> m (Bool, Bool)

Returns: True if the event is a valid still-frame event. False if not

Parse a Event, identify whether it is a Still Frame event, and if so return the still-frame state from the event. If the event represents the start of a still frame, the returned in_still value will be True; otherwise it will be False.

Create a still frame event using videoEventNewStillFrame

videoEventParseUpstreamForceKeyUnit

videoEventParseUpstreamForceKeyUnit Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> Event

event: A Event to parse

-> m (Bool, Word64, Bool, Word32)

Returns: True if the event is a valid upstream force-key-unit event. False if not

Get running-time, all-headers and count in the force key unit event. See videoEventNewUpstreamForceKeyUnit for a full description of the upstream force key unit event.

Create an upstream force key unit event using videoEventNewUpstreamForceKeyUnit

runningTime will be adjusted for any pad offsets of pads it was passing through.

videoFormatsRaw

videoFormatsRaw Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> m [VideoFormat]

Returns: an array of VideoFormat

Return all the raw video formats supported by GStreamer.

Since: 1.18

videoGlTextureUploadMetaApiGetType

videoGlTextureUploadMetaApiGetType :: (HasCallStack, MonadIO m) => m GType Source #

No description available in the introspection data.

videoGuessFramerate

videoGuessFramerate Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> Word64

duration: Nominal duration of one frame

-> m (Bool, Int32, Int32)

Returns: True if a close "standard" framerate was recognised, and False otherwise.

Given the nominal duration of one video frame, this function checks some standard framerates for a close match (within 0.1%) and returns one if possible.

If no close match is found, it calculates an arbitrary framerate and returns False.

It also returns False if a duration of 0 is passed.

Since: 1.6
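The matching step can be sketched as follows. The candidate list here is an assumption for illustration; the real function checks a larger internal table and also computes a fallback framerate when no candidate matches:

```haskell
-- Given a nominal frame duration in nanoseconds, find a standard
-- framerate whose ideal frame duration is within 0.1%. Illustrative
-- re-implementation with an assumed candidate list.
guessStandardFramerate :: Integer -> Maybe (Integer, Integer)
guessStandardFramerate 0   = Nothing
guessStandardFramerate dur = go candidates
  where
    candidates =
      [ (24, 1), (25, 1), (30, 1), (50, 1), (60, 1)
      , (24000, 1001), (30000, 1001), (60000, 1001) ]
    go [] = Nothing
    go ((n, d) : rest)
      | closeEnough n d = Just (n, d)
      | otherwise       = go rest
    closeEnough n d =
      let ideal = 1e9 * fromIntegral d / fromIntegral n :: Double
      in abs (fromIntegral dur - ideal) / ideal < 0.001
```

A 40 ms (40000000 ns) frame duration, for example, matches 25/1.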

videoMakeRawCaps

videoMakeRawCaps Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> Maybe [VideoFormat]

formats: an array of raw VideoFormat, or Nothing

-> m Caps

Returns: a video Caps

Return a generic raw video caps for formats defined in formats. If formats is Nothing returns a caps for all the supported raw video formats, see videoFormatsRaw.

Since: 1.18

videoMakeRawCapsWithFeatures

videoMakeRawCapsWithFeatures Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> Maybe [VideoFormat]

formats: an array of raw VideoFormat, or Nothing

-> Maybe CapsFeatures

features: the CapsFeatures to set on the caps

-> m Caps

Returns: a video Caps

Return a generic raw video caps for formats defined in formats with features features. If formats is Nothing returns a caps for all the supported video formats, see videoFormatsRaw.

Since: 1.18

videoMetaApiGetType

videoMetaApiGetType :: (HasCallStack, MonadIO m) => m GType Source #

No description available in the introspection data.

videoMultiviewGetDoubledHeightModes

videoMultiviewGetDoubledHeightModes Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> m GValue

Returns: A const Value containing a list of stereo video modes

Utility function that returns a Value with a GstList of packed stereo video modes with double the height of a single view for use in caps negotiations. Currently this is top-bottom and row-interleaved.

Since: 1.6

videoMultiviewGetDoubledSizeModes

videoMultiviewGetDoubledSizeModes Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> m GValue

Returns: A const Value containing a list of stereo video modes

Utility function that returns a Value with a GstList of packed stereo video modes that have double the width/height of a single view for use in caps negotiation. Currently this is just 'checkerboard' layout.

Since: 1.6

videoMultiviewGetDoubledWidthModes

videoMultiviewGetDoubledWidthModes Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> m GValue

Returns: A const Value containing a list of stereo video modes

Utility function that returns a Value with a GstList of packed stereo video modes with double the width of a single view for use in caps negotiations. Currently this is side-by-side, side-by-side-quincunx and column-interleaved.

Since: 1.6

videoMultiviewGetMonoModes

videoMultiviewGetMonoModes Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> m GValue

Returns: A const Value containing a list of mono video modes

Utility function that returns a Value with a GstList of mono video modes (mono/left/right) for use in caps negotiations.

Since: 1.6

videoMultiviewGetUnpackedModes

videoMultiviewGetUnpackedModes Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> m GValue

Returns: A const Value containing a list of 'unpacked' stereo video modes

Utility function that returns a Value with a GstList of unpacked stereo video modes (separated/frame-by-frame/frame-by-frame-multiview) for use in caps negotiations.

Since: 1.6

videoMultiviewGuessHalfAspect

videoMultiviewGuessHalfAspect Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> VideoMultiviewMode

mvMode: A VideoMultiviewMode

-> Word32

width: Video frame width in pixels

-> Word32

height: Video frame height in pixels

-> Word32

parN: Numerator of the video pixel-aspect-ratio

-> Word32

parD: Denominator of the video pixel-aspect-ratio

-> m Bool

Returns: A boolean indicating whether the GST_VIDEO_MULTIVIEW_FLAGS_HALF_ASPECT flag should be set.

Utility function that heuristically guesses whether a frame-packed stereoscopic video contains half width/height encoded views or full-frame views, by looking at the overall display aspect ratio.

Since: 1.6

videoMultiviewVideoInfoChangeMode

videoMultiviewVideoInfoChangeMode Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> VideoInfo

info: A VideoInfo structure to operate on

-> VideoMultiviewMode

outMviewMode: A VideoMultiviewMode value

-> [VideoMultiviewFlags]

outMviewFlags: A set of VideoMultiviewFlags

-> m () 

Utility function that transforms the width/height/PAR and multiview mode and flags of a VideoInfo into the requested mode.

Since: 1.6

videoOverlayCompositionMetaApiGetType

videoOverlayCompositionMetaApiGetType :: (HasCallStack, MonadIO m) => m GType Source #

No description available in the introspection data.

videoRegionOfInterestMetaApiGetType

videoRegionOfInterestMetaApiGetType :: (HasCallStack, MonadIO m) => m GType Source #

No description available in the introspection data.

videoTileGetIndex

videoTileGetIndex Source #

Arguments

:: (HasCallStack, MonadIO m) 
=> VideoTileMode

mode: a VideoTileMode

-> Int32

x: x coordinate

-> Int32

y: y coordinate

-> Int32

xTiles: number of horizontal tiles

-> Int32

yTiles: number of vertical tiles

-> m Word32

Returns: the index of the tile at x and y in the tiled image of xTiles by yTiles.

Get the tile index of the tile at coordinates x and y in the tiled image of xTiles by yTiles.

Use this method when mode is of type VideoTileTypeIndexed.

Since: 1.4
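For a plain row-major tiling the computation reduces to the following sketch; the indexed tile modes this function actually handles can use more involved mappings:

```haskell
-- Tile index in a row-major (left-to-right, top-to-bottom) layout of
-- xTiles by yTiles tiles. Illustrative helper, not part of gi-gstvideo.
linearTileIndex
  :: Int  -- x coordinate of the tile
  -> Int  -- y coordinate of the tile
  -> Int  -- xTiles: number of horizontal tiles
  -> Int
linearTileIndex x y xTiles = y * xTiles + x
```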

videoTimeCodeMetaApiGetType

videoTimeCodeMetaApiGetType :: (HasCallStack, MonadIO m) => m GType Source #

No description available in the introspection data.