Copyright    | (c) 2015-2016 Brendan Hay
License      | Mozilla Public License, v. 2.0.
Maintainer   | Brendan Hay <brendan.g.hay@gmail.com>
Stability    | auto-generated
Portability  | non-portable (GHC extensions)
Safe Haskell | None
Language     | Haskell2010
- Service Configuration
- OAuth Scopes
- API Declaration
- Resources
- Types
- GoogleRpc_StatusDetailsItem
- GoogleCloudVideointelligenceV1beta2_ExplicitContentAnnotation
- GoogleCloudVideointelligenceV1_SpeechRecognitionAlternative
- GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoRequest
- GoogleCloudVideointelligenceV1beta2_AnnotateVideoResponse
- GoogleCloudVideointelligenceV1_WordInfo
- GoogleCloudVideointelligenceV1p1beta1_ExplicitContentFrame
- GoogleCloudVideointelligenceV1beta2_Entity
- GoogleCloudVideointelligenceV1p2beta1_TextAnnotation
- GoogleCloudVideointelligenceV1p2beta1_VideoSegment
- GoogleCloudVideointelligenceV1_VideoAnnotationProgress
- GoogleCloudVideointelligenceV1beta2_LabelFrame
- GoogleCloudVideointelligenceV1_SpeechTranscription
- GoogleCloudVideointelligenceV1beta2_AnnotateVideoProgress
- GoogleCloudVideointelligenceV1p2beta1_ObjectTrackingFrame
- GoogleCloudVideointelligenceV1_LabelAnnotation
- GoogleCloudVideointelligenceV1p2beta1_SpeechRecognitionAlternative
- GoogleCloudVideointelligenceV1p2beta1_WordInfo
- GoogleCloudVideointelligenceV1p1beta1_LabelFrame
- GoogleCloudVideointelligenceV1p1beta1_ShotChangeDetectionConfig
- GoogleCloudVideointelligenceV1p2beta1_ExplicitContentAnnotation
- GoogleCloudVideointelligenceV1_ExplicitContentFramePornographyLikelihood
- GoogleCloudVideointelligenceV1p1beta1_Entity
- GoogleCloudVideointelligenceV1p2beta1_AnnotateVideoResponse
- GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoProgress
- GoogleCloudVideointelligenceV1_VideoAnnotationResults
- GoogleCloudVideointelligenceV1p2beta1_ObjectTrackingAnnotation
- GoogleCloudVideointelligenceV1p1beta1_VideoContext
- GoogleCloudVideointelligenceV1p2beta1_AnnotateVideoProgress
- GoogleLongrunning_OperationMetadata
- GoogleCloudVideointelligenceV1p1beta1_LabelSegment
- GoogleCloudVideointelligenceV1p2beta1_LabelFrame
- GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationProgress
- GoogleCloudVideointelligenceV1p2beta1_Entity
- GoogleCloudVideointelligenceV1p1beta1_WordInfo
- GoogleLongrunning_Operation
- GoogleCloudVideointelligenceV1p1beta1_SpeechRecognitionAlternative
- GoogleCloudVideointelligenceV1_ExplicitContentFrame
- GoogleCloudVideointelligenceV1beta2_VideoSegment
- GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationResults
- GoogleCloudVideointelligenceV1beta2_LabelSegment
- GoogleCloudVideointelligenceV1p2beta1_NormalizedBoundingPoly
- GoogleCloudVideointelligenceV1beta2_WordInfo
- GoogleCloudVideointelligenceV1_ExplicitContentAnnotation
- GoogleCloudVideointelligenceV1_AnnotateVideoResponse
- GoogleCloudVideointelligenceV1p2beta1_NormalizedVertex
- GoogleCloudVideointelligenceV1p2beta1_LabelAnnotation
- GoogleCloudVideointelligenceV1beta2_SpeechRecognitionAlternative
- GoogleCloudVideointelligenceV1p2beta1_ExplicitContentFramePornographyLikelihood
- GoogleCloudVideointelligenceV1p1beta1_VideoSegment
- GoogleCloudVideointelligenceV1p1beta1_ExplicitContentDetectionConfig
- GoogleCloudVideointelligenceV1p1beta1_LabelAnnotation
- GoogleCloudVideointelligenceV1_LabelFrame
- GoogleCloudVideointelligenceV1p1beta1_LabelDetectionConfigLabelDetectionMode
- GoogleCloudVideointelligenceV1p1beta1_ExplicitContentFramePornographyLikelihood
- GoogleCloudVideointelligenceV1p2beta1_ExplicitContentFrame
- GoogleCloudVideointelligenceV1_Entity
- GoogleCloudVideointelligenceV1beta2_VideoAnnotationProgress
- GoogleCloudVideointelligenceV1beta2_SpeechTranscription
- GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationResults
- Xgafv
- GoogleCloudVideointelligenceV1_AnnotateVideoProgress
- GoogleCloudVideointelligenceV1p1beta1_SpeechTranscriptionConfig
- GoogleCloudVideointelligenceV1beta2_ExplicitContentFramePornographyLikelihood
- GoogleLongrunning_OperationResponse
- GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationProgress
- GoogleCloudVideointelligenceV1p2beta1_TextFrame
- GoogleCloudVideointelligenceV1beta2_LabelAnnotation
- GoogleCloudVideointelligenceV1p1beta1_LabelDetectionConfig
- GoogleCloudVideointelligenceV1p1beta1_SpeechTranscription
- GoogleCloudVideointelligenceV1beta2_VideoAnnotationResults
- GoogleCloudVideointelligenceV1p2beta1_LabelSegment
- GoogleCloudVideointelligenceV1p2beta1_NormalizedBoundingBox
- GoogleCloudVideointelligenceV1p2beta1_TextSegment
- GoogleCloudVideointelligenceV1p2beta1_SpeechTranscription
- GoogleRpc_Status
- GoogleCloudVideointelligenceV1_VideoSegment
- GoogleCloudVideointelligenceV1p1beta1_ExplicitContentAnnotation
- GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoResponse
- GoogleCloudVideointelligenceV1beta2_ExplicitContentFrame
- GoogleCloudVideointelligenceV1p1beta1_SpeechContext
- GoogleCloudVideointelligenceV1_LabelSegment
Detects objects, explicit content, and scene changes in videos. It also lets you specify the region in which annotation is processed, and transcribes speech to text.
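A minimal end-to-end sketch of invoking this service with the types below, assuming the `gogol` and `gogol-videointelligence` packages and application-default credentials are available; the `gs://` input URI is a placeholder:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((&), (.~), (?~), (<&>), (^.))
import Network.Google
import Network.Google.VideoIntelligence

-- Submit a label-detection request and print the returned operation name.
main :: IO ()
main = do
  env <- newEnv <&> (envScopes .~ cloudPlatformScope)
  let req = googleCloudVideointelligenceV1p1beta1_AnnotateVideoRequest
              & gcvvavrInputURI ?~ "gs://example-bucket/example.mp4"  -- placeholder URI
              & gcvvavrFeatures .~ ["LABEL_DETECTION"]
  op <- runResourceT . runGoogle env $ send (videosAnnotate req)
  print (op ^. gloName)
```

`videosAnnotate` comes from the re-exported `Network.Google.Resource.VideoIntelligence.Videos.Annotate`; the call returns a `GoogleLongrunning_Operation` that must be polled for the final result.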
Synopsis
- videoIntelligenceService :: ServiceConfig
- cloudPlatformScope :: Proxy '["https://www.googleapis.com/auth/cloud-platform"]
- type VideoIntelligenceAPI = VideosAnnotateResource
- module Network.Google.Resource.VideoIntelligence.Videos.Annotate
- data GoogleRpc_StatusDetailsItem
- googleRpc_StatusDetailsItem :: HashMap Text JSONValue -> GoogleRpc_StatusDetailsItem
- grsdiAddtional :: Lens' GoogleRpc_StatusDetailsItem (HashMap Text JSONValue)
- data GoogleCloudVideointelligenceV1beta2_ExplicitContentAnnotation
- googleCloudVideointelligenceV1beta2_ExplicitContentAnnotation :: GoogleCloudVideointelligenceV1beta2_ExplicitContentAnnotation
- gcvvecaFrames :: Lens' GoogleCloudVideointelligenceV1beta2_ExplicitContentAnnotation [GoogleCloudVideointelligenceV1beta2_ExplicitContentFrame]
- data GoogleCloudVideointelligenceV1_SpeechRecognitionAlternative
- googleCloudVideointelligenceV1_SpeechRecognitionAlternative :: GoogleCloudVideointelligenceV1_SpeechRecognitionAlternative
- gcvvsraConfidence :: Lens' GoogleCloudVideointelligenceV1_SpeechRecognitionAlternative (Maybe Double)
- gcvvsraWords :: Lens' GoogleCloudVideointelligenceV1_SpeechRecognitionAlternative [GoogleCloudVideointelligenceV1_WordInfo]
- gcvvsraTranscript :: Lens' GoogleCloudVideointelligenceV1_SpeechRecognitionAlternative (Maybe Text)
- data GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoRequest
- googleCloudVideointelligenceV1p1beta1_AnnotateVideoRequest :: GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoRequest
- gcvvavrInputURI :: Lens' GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoRequest (Maybe Text)
- gcvvavrVideoContext :: Lens' GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoRequest (Maybe GoogleCloudVideointelligenceV1p1beta1_VideoContext)
- gcvvavrInputContent :: Lens' GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoRequest (Maybe ByteString)
- gcvvavrFeatures :: Lens' GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoRequest [Text]
- gcvvavrLocationId :: Lens' GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoRequest (Maybe Text)
- gcvvavrOutputURI :: Lens' GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoRequest (Maybe Text)
- data GoogleCloudVideointelligenceV1beta2_AnnotateVideoResponse
- googleCloudVideointelligenceV1beta2_AnnotateVideoResponse :: GoogleCloudVideointelligenceV1beta2_AnnotateVideoResponse
- gcvvavrAnnotationResults :: Lens' GoogleCloudVideointelligenceV1beta2_AnnotateVideoResponse [GoogleCloudVideointelligenceV1beta2_VideoAnnotationResults]
- data GoogleCloudVideointelligenceV1_WordInfo
- googleCloudVideointelligenceV1_WordInfo :: GoogleCloudVideointelligenceV1_WordInfo
- gcvvwiStartTime :: Lens' GoogleCloudVideointelligenceV1_WordInfo (Maybe Scientific)
- gcvvwiConfidence :: Lens' GoogleCloudVideointelligenceV1_WordInfo (Maybe Double)
- gcvvwiEndTime :: Lens' GoogleCloudVideointelligenceV1_WordInfo (Maybe Scientific)
- gcvvwiWord :: Lens' GoogleCloudVideointelligenceV1_WordInfo (Maybe Text)
- gcvvwiSpeakerTag :: Lens' GoogleCloudVideointelligenceV1_WordInfo (Maybe Int32)
- data GoogleCloudVideointelligenceV1p1beta1_ExplicitContentFrame
- googleCloudVideointelligenceV1p1beta1_ExplicitContentFrame :: GoogleCloudVideointelligenceV1p1beta1_ExplicitContentFrame
- gcvvecfTimeOffSet :: Lens' GoogleCloudVideointelligenceV1p1beta1_ExplicitContentFrame (Maybe Scientific)
- gcvvecfPornographyLikelihood :: Lens' GoogleCloudVideointelligenceV1p1beta1_ExplicitContentFrame (Maybe GoogleCloudVideointelligenceV1p1beta1_ExplicitContentFramePornographyLikelihood)
- data GoogleCloudVideointelligenceV1beta2_Entity
- googleCloudVideointelligenceV1beta2_Entity :: GoogleCloudVideointelligenceV1beta2_Entity
- gcvveLanguageCode :: Lens' GoogleCloudVideointelligenceV1beta2_Entity (Maybe Text)
- gcvveEntityId :: Lens' GoogleCloudVideointelligenceV1beta2_Entity (Maybe Text)
- gcvveDescription :: Lens' GoogleCloudVideointelligenceV1beta2_Entity (Maybe Text)
- data GoogleCloudVideointelligenceV1p2beta1_TextAnnotation
- googleCloudVideointelligenceV1p2beta1_TextAnnotation :: GoogleCloudVideointelligenceV1p2beta1_TextAnnotation
- gcvvtaText :: Lens' GoogleCloudVideointelligenceV1p2beta1_TextAnnotation (Maybe Text)
- gcvvtaSegments :: Lens' GoogleCloudVideointelligenceV1p2beta1_TextAnnotation [GoogleCloudVideointelligenceV1p2beta1_TextSegment]
- data GoogleCloudVideointelligenceV1p2beta1_VideoSegment
- googleCloudVideointelligenceV1p2beta1_VideoSegment :: GoogleCloudVideointelligenceV1p2beta1_VideoSegment
- gcvvvsStartTimeOffSet :: Lens' GoogleCloudVideointelligenceV1p2beta1_VideoSegment (Maybe Scientific)
- gcvvvsEndTimeOffSet :: Lens' GoogleCloudVideointelligenceV1p2beta1_VideoSegment (Maybe Scientific)
- data GoogleCloudVideointelligenceV1_VideoAnnotationProgress
- googleCloudVideointelligenceV1_VideoAnnotationProgress :: GoogleCloudVideointelligenceV1_VideoAnnotationProgress
- gcvvvapStartTime :: Lens' GoogleCloudVideointelligenceV1_VideoAnnotationProgress (Maybe UTCTime)
- gcvvvapInputURI :: Lens' GoogleCloudVideointelligenceV1_VideoAnnotationProgress (Maybe Text)
- gcvvvapProgressPercent :: Lens' GoogleCloudVideointelligenceV1_VideoAnnotationProgress (Maybe Int32)
- gcvvvapUpdateTime :: Lens' GoogleCloudVideointelligenceV1_VideoAnnotationProgress (Maybe UTCTime)
- data GoogleCloudVideointelligenceV1beta2_LabelFrame
- googleCloudVideointelligenceV1beta2_LabelFrame :: GoogleCloudVideointelligenceV1beta2_LabelFrame
- gcvvlfTimeOffSet :: Lens' GoogleCloudVideointelligenceV1beta2_LabelFrame (Maybe Scientific)
- gcvvlfConfidence :: Lens' GoogleCloudVideointelligenceV1beta2_LabelFrame (Maybe Double)
- data GoogleCloudVideointelligenceV1_SpeechTranscription
- googleCloudVideointelligenceV1_SpeechTranscription :: GoogleCloudVideointelligenceV1_SpeechTranscription
- gcvvstAlternatives :: Lens' GoogleCloudVideointelligenceV1_SpeechTranscription [GoogleCloudVideointelligenceV1_SpeechRecognitionAlternative]
- gcvvstLanguageCode :: Lens' GoogleCloudVideointelligenceV1_SpeechTranscription (Maybe Text)
- data GoogleCloudVideointelligenceV1beta2_AnnotateVideoProgress
- googleCloudVideointelligenceV1beta2_AnnotateVideoProgress :: GoogleCloudVideointelligenceV1beta2_AnnotateVideoProgress
- gcvvavpAnnotationProgress :: Lens' GoogleCloudVideointelligenceV1beta2_AnnotateVideoProgress [GoogleCloudVideointelligenceV1beta2_VideoAnnotationProgress]
- data GoogleCloudVideointelligenceV1p2beta1_ObjectTrackingFrame
- googleCloudVideointelligenceV1p2beta1_ObjectTrackingFrame :: GoogleCloudVideointelligenceV1p2beta1_ObjectTrackingFrame
- gcvvotfTimeOffSet :: Lens' GoogleCloudVideointelligenceV1p2beta1_ObjectTrackingFrame (Maybe Scientific)
- gcvvotfNormalizedBoundingBox :: Lens' GoogleCloudVideointelligenceV1p2beta1_ObjectTrackingFrame (Maybe GoogleCloudVideointelligenceV1p2beta1_NormalizedBoundingBox)
- data GoogleCloudVideointelligenceV1_LabelAnnotation
- googleCloudVideointelligenceV1_LabelAnnotation :: GoogleCloudVideointelligenceV1_LabelAnnotation
- gcvvlaCategoryEntities :: Lens' GoogleCloudVideointelligenceV1_LabelAnnotation [GoogleCloudVideointelligenceV1_Entity]
- gcvvlaFrames :: Lens' GoogleCloudVideointelligenceV1_LabelAnnotation [GoogleCloudVideointelligenceV1_LabelFrame]
- gcvvlaSegments :: Lens' GoogleCloudVideointelligenceV1_LabelAnnotation [GoogleCloudVideointelligenceV1_LabelSegment]
- gcvvlaEntity :: Lens' GoogleCloudVideointelligenceV1_LabelAnnotation (Maybe GoogleCloudVideointelligenceV1_Entity)
- data GoogleCloudVideointelligenceV1p2beta1_SpeechRecognitionAlternative
- googleCloudVideointelligenceV1p2beta1_SpeechRecognitionAlternative :: GoogleCloudVideointelligenceV1p2beta1_SpeechRecognitionAlternative
- gConfidence :: Lens' GoogleCloudVideointelligenceV1p2beta1_SpeechRecognitionAlternative (Maybe Double)
- gWords :: Lens' GoogleCloudVideointelligenceV1p2beta1_SpeechRecognitionAlternative [GoogleCloudVideointelligenceV1p2beta1_WordInfo]
- gTranscript :: Lens' GoogleCloudVideointelligenceV1p2beta1_SpeechRecognitionAlternative (Maybe Text)
- data GoogleCloudVideointelligenceV1p2beta1_WordInfo
- googleCloudVideointelligenceV1p2beta1_WordInfo :: GoogleCloudVideointelligenceV1p2beta1_WordInfo
- gooStartTime :: Lens' GoogleCloudVideointelligenceV1p2beta1_WordInfo (Maybe Scientific)
- gooConfidence :: Lens' GoogleCloudVideointelligenceV1p2beta1_WordInfo (Maybe Double)
- gooEndTime :: Lens' GoogleCloudVideointelligenceV1p2beta1_WordInfo (Maybe Scientific)
- gooWord :: Lens' GoogleCloudVideointelligenceV1p2beta1_WordInfo (Maybe Text)
- gooSpeakerTag :: Lens' GoogleCloudVideointelligenceV1p2beta1_WordInfo (Maybe Int32)
- data GoogleCloudVideointelligenceV1p1beta1_LabelFrame
- googleCloudVideointelligenceV1p1beta1_LabelFrame :: GoogleCloudVideointelligenceV1p1beta1_LabelFrame
- gcvvlfcTimeOffSet :: Lens' GoogleCloudVideointelligenceV1p1beta1_LabelFrame (Maybe Scientific)
- gcvvlfcConfidence :: Lens' GoogleCloudVideointelligenceV1p1beta1_LabelFrame (Maybe Double)
- data GoogleCloudVideointelligenceV1p1beta1_ShotChangeDetectionConfig
- googleCloudVideointelligenceV1p1beta1_ShotChangeDetectionConfig :: GoogleCloudVideointelligenceV1p1beta1_ShotChangeDetectionConfig
- gcvvscdcModel :: Lens' GoogleCloudVideointelligenceV1p1beta1_ShotChangeDetectionConfig (Maybe Text)
- data GoogleCloudVideointelligenceV1p2beta1_ExplicitContentAnnotation
- googleCloudVideointelligenceV1p2beta1_ExplicitContentAnnotation :: GoogleCloudVideointelligenceV1p2beta1_ExplicitContentAnnotation
- gFrames :: Lens' GoogleCloudVideointelligenceV1p2beta1_ExplicitContentAnnotation [GoogleCloudVideointelligenceV1p2beta1_ExplicitContentFrame]
- data GoogleCloudVideointelligenceV1_ExplicitContentFramePornographyLikelihood
- data GoogleCloudVideointelligenceV1p1beta1_Entity
- googleCloudVideointelligenceV1p1beta1_Entity :: GoogleCloudVideointelligenceV1p1beta1_Entity
- gLanguageCode :: Lens' GoogleCloudVideointelligenceV1p1beta1_Entity (Maybe Text)
- gEntityId :: Lens' GoogleCloudVideointelligenceV1p1beta1_Entity (Maybe Text)
- gDescription :: Lens' GoogleCloudVideointelligenceV1p1beta1_Entity (Maybe Text)
- data GoogleCloudVideointelligenceV1p2beta1_AnnotateVideoResponse
- googleCloudVideointelligenceV1p2beta1_AnnotateVideoResponse :: GoogleCloudVideointelligenceV1p2beta1_AnnotateVideoResponse
- gAnnotationResults :: Lens' GoogleCloudVideointelligenceV1p2beta1_AnnotateVideoResponse [GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationResults]
- data GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoProgress
- googleCloudVideointelligenceV1p1beta1_AnnotateVideoProgress :: GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoProgress
- gAnnotationProgress :: Lens' GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoProgress [GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationProgress]
- data GoogleCloudVideointelligenceV1_VideoAnnotationResults
- googleCloudVideointelligenceV1_VideoAnnotationResults :: GoogleCloudVideointelligenceV1_VideoAnnotationResults
- gcvvvarShotAnnotations :: Lens' GoogleCloudVideointelligenceV1_VideoAnnotationResults [GoogleCloudVideointelligenceV1_VideoSegment]
- gcvvvarShotLabelAnnotations :: Lens' GoogleCloudVideointelligenceV1_VideoAnnotationResults [GoogleCloudVideointelligenceV1_LabelAnnotation]
- gcvvvarInputURI :: Lens' GoogleCloudVideointelligenceV1_VideoAnnotationResults (Maybe Text)
- gcvvvarError :: Lens' GoogleCloudVideointelligenceV1_VideoAnnotationResults (Maybe GoogleRpc_Status)
- gcvvvarFrameLabelAnnotations :: Lens' GoogleCloudVideointelligenceV1_VideoAnnotationResults [GoogleCloudVideointelligenceV1_LabelAnnotation]
- gcvvvarSpeechTranscriptions :: Lens' GoogleCloudVideointelligenceV1_VideoAnnotationResults [GoogleCloudVideointelligenceV1_SpeechTranscription]
- gcvvvarSegmentLabelAnnotations :: Lens' GoogleCloudVideointelligenceV1_VideoAnnotationResults [GoogleCloudVideointelligenceV1_LabelAnnotation]
- gcvvvarExplicitAnnotation :: Lens' GoogleCloudVideointelligenceV1_VideoAnnotationResults (Maybe GoogleCloudVideointelligenceV1_ExplicitContentAnnotation)
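The list-valued lenses above compose with the fold operators from the `lens` package, which is how results are typically consumed. A sketch that collects every transcript alternative from a completed v1 response, using only accessors listed in this synopsis:

```haskell
import Control.Lens ((^..), _Just)
import Data.Text (Text)
import Network.Google.VideoIntelligence

-- Walk response -> results -> transcriptions -> alternatives -> transcript,
-- keeping only the alternatives whose transcript is present.
transcripts :: GoogleCloudVideointelligenceV1_AnnotateVideoResponse -> [Text]
transcripts resp =
  resp ^.. gooAnnotationResults . traverse
         . gcvvvarSpeechTranscriptions . traverse
         . gcvvstAlternatives . traverse
         . gcvvsraTranscript . _Just
```

Here `traverse` (from the Prelude) acts as a traversal over each intermediate list, so the whole chain is a single fold from the response to the `Maybe Text` transcripts.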
- data GoogleCloudVideointelligenceV1p2beta1_ObjectTrackingAnnotation
- googleCloudVideointelligenceV1p2beta1_ObjectTrackingAnnotation :: GoogleCloudVideointelligenceV1p2beta1_ObjectTrackingAnnotation
- gcvvotaFrames :: Lens' GoogleCloudVideointelligenceV1p2beta1_ObjectTrackingAnnotation [GoogleCloudVideointelligenceV1p2beta1_ObjectTrackingFrame]
- gcvvotaConfidence :: Lens' GoogleCloudVideointelligenceV1p2beta1_ObjectTrackingAnnotation (Maybe Double)
- gcvvotaSegment :: Lens' GoogleCloudVideointelligenceV1p2beta1_ObjectTrackingAnnotation (Maybe GoogleCloudVideointelligenceV1p2beta1_VideoSegment)
- gcvvotaEntity :: Lens' GoogleCloudVideointelligenceV1p2beta1_ObjectTrackingAnnotation (Maybe GoogleCloudVideointelligenceV1p2beta1_Entity)
- data GoogleCloudVideointelligenceV1p1beta1_VideoContext
- googleCloudVideointelligenceV1p1beta1_VideoContext :: GoogleCloudVideointelligenceV1p1beta1_VideoContext
- gcvvvcSpeechTranscriptionConfig :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoContext (Maybe GoogleCloudVideointelligenceV1p1beta1_SpeechTranscriptionConfig)
- gcvvvcExplicitContentDetectionConfig :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoContext (Maybe GoogleCloudVideointelligenceV1p1beta1_ExplicitContentDetectionConfig)
- gcvvvcLabelDetectionConfig :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoContext (Maybe GoogleCloudVideointelligenceV1p1beta1_LabelDetectionConfig)
- gcvvvcSegments :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoContext [GoogleCloudVideointelligenceV1p1beta1_VideoSegment]
- gcvvvcShotChangeDetectionConfig :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoContext (Maybe GoogleCloudVideointelligenceV1p1beta1_ShotChangeDetectionConfig)
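Per-feature configuration is supplied through a `GoogleCloudVideointelligenceV1p1beta1_VideoContext` attached to the request via `gcvvavrVideoContext`. A sketch requesting a specific shot-change model; `"builtin/stable"` is one of the model names documented by the Video Intelligence API and is an assumption here:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((&), (?~))
import Network.Google.VideoIntelligence

-- A context selecting the stable shot-change model; attach it to a
-- request with: request & gcvvavrVideoContext ?~ ctx
ctx :: GoogleCloudVideointelligenceV1p1beta1_VideoContext
ctx = googleCloudVideointelligenceV1p1beta1_VideoContext
        & gcvvvcShotChangeDetectionConfig ?~
            ( googleCloudVideointelligenceV1p1beta1_ShotChangeDetectionConfig
                & gcvvscdcModel ?~ "builtin/stable" )
```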
- data GoogleCloudVideointelligenceV1p2beta1_AnnotateVideoProgress
- googleCloudVideointelligenceV1p2beta1_AnnotateVideoProgress :: GoogleCloudVideointelligenceV1p2beta1_AnnotateVideoProgress
- gcvvavpsAnnotationProgress :: Lens' GoogleCloudVideointelligenceV1p2beta1_AnnotateVideoProgress [GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationProgress]
- data GoogleLongrunning_OperationMetadata
- googleLongrunning_OperationMetadata :: HashMap Text JSONValue -> GoogleLongrunning_OperationMetadata
- glomAddtional :: Lens' GoogleLongrunning_OperationMetadata (HashMap Text JSONValue)
- data GoogleCloudVideointelligenceV1p1beta1_LabelSegment
- googleCloudVideointelligenceV1p1beta1_LabelSegment :: GoogleCloudVideointelligenceV1p1beta1_LabelSegment
- gcvvlsConfidence :: Lens' GoogleCloudVideointelligenceV1p1beta1_LabelSegment (Maybe Double)
- gcvvlsSegment :: Lens' GoogleCloudVideointelligenceV1p1beta1_LabelSegment (Maybe GoogleCloudVideointelligenceV1p1beta1_VideoSegment)
- data GoogleCloudVideointelligenceV1p2beta1_LabelFrame
- googleCloudVideointelligenceV1p2beta1_LabelFrame :: GoogleCloudVideointelligenceV1p2beta1_LabelFrame
- ggTimeOffSet :: Lens' GoogleCloudVideointelligenceV1p2beta1_LabelFrame (Maybe Scientific)
- ggConfidence :: Lens' GoogleCloudVideointelligenceV1p2beta1_LabelFrame (Maybe Double)
- data GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationProgress
- googleCloudVideointelligenceV1p2beta1_VideoAnnotationProgress :: GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationProgress
- gStartTime :: Lens' GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationProgress (Maybe UTCTime)
- gInputURI :: Lens' GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationProgress (Maybe Text)
- gProgressPercent :: Lens' GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationProgress (Maybe Int32)
- gUpdateTime :: Lens' GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationProgress (Maybe UTCTime)
- data GoogleCloudVideointelligenceV1p2beta1_Entity
- googleCloudVideointelligenceV1p2beta1_Entity :: GoogleCloudVideointelligenceV1p2beta1_Entity
- gooLanguageCode :: Lens' GoogleCloudVideointelligenceV1p2beta1_Entity (Maybe Text)
- gooEntityId :: Lens' GoogleCloudVideointelligenceV1p2beta1_Entity (Maybe Text)
- gooDescription :: Lens' GoogleCloudVideointelligenceV1p2beta1_Entity (Maybe Text)
- data GoogleCloudVideointelligenceV1p1beta1_WordInfo
- googleCloudVideointelligenceV1p1beta1_WordInfo :: GoogleCloudVideointelligenceV1p1beta1_WordInfo
- gcvvwicStartTime :: Lens' GoogleCloudVideointelligenceV1p1beta1_WordInfo (Maybe Scientific)
- gcvvwicConfidence :: Lens' GoogleCloudVideointelligenceV1p1beta1_WordInfo (Maybe Double)
- gcvvwicEndTime :: Lens' GoogleCloudVideointelligenceV1p1beta1_WordInfo (Maybe Scientific)
- gcvvwicWord :: Lens' GoogleCloudVideointelligenceV1p1beta1_WordInfo (Maybe Text)
- gcvvwicSpeakerTag :: Lens' GoogleCloudVideointelligenceV1p1beta1_WordInfo (Maybe Int32)
- data GoogleLongrunning_Operation
- googleLongrunning_Operation :: GoogleLongrunning_Operation
- gloDone :: Lens' GoogleLongrunning_Operation (Maybe Bool)
- gloError :: Lens' GoogleLongrunning_Operation (Maybe GoogleRpc_Status)
- gloResponse :: Lens' GoogleLongrunning_Operation (Maybe GoogleLongrunning_OperationResponse)
- gloName :: Lens' GoogleLongrunning_Operation (Maybe Text)
- gloMetadata :: Lens' GoogleLongrunning_Operation (Maybe GoogleLongrunning_OperationMetadata)
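Since `videos.annotate` returns a `GoogleLongrunning_Operation`, the usual pattern is to poll until `gloDone` is `Just True` and then branch on error versus response. A sketch of that check using only the lenses above (the `String` labels are illustrative):

```haskell
import Control.Lens ((^.))
import Network.Google.VideoIntelligence

-- Classify a polled operation as still running, failed, or finished.
operationStatus :: GoogleLongrunning_Operation -> String
operationStatus op
  | op ^. gloDone /= Just True = "still running"
  | Just _ <- op ^. gloError   = "failed"      -- inspect the GoogleRpc_Status
  | otherwise                  = "finished"    -- result is in gloResponse
```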
- data GoogleCloudVideointelligenceV1p1beta1_SpeechRecognitionAlternative
- googleCloudVideointelligenceV1p1beta1_SpeechRecognitionAlternative :: GoogleCloudVideointelligenceV1p1beta1_SpeechRecognitionAlternative
- gcvvsracConfidence :: Lens' GoogleCloudVideointelligenceV1p1beta1_SpeechRecognitionAlternative (Maybe Double)
- gcvvsracWords :: Lens' GoogleCloudVideointelligenceV1p1beta1_SpeechRecognitionAlternative [GoogleCloudVideointelligenceV1p1beta1_WordInfo]
- gcvvsracTranscript :: Lens' GoogleCloudVideointelligenceV1p1beta1_SpeechRecognitionAlternative (Maybe Text)
- data GoogleCloudVideointelligenceV1_ExplicitContentFrame
- googleCloudVideointelligenceV1_ExplicitContentFrame :: GoogleCloudVideointelligenceV1_ExplicitContentFrame
- gTimeOffSet :: Lens' GoogleCloudVideointelligenceV1_ExplicitContentFrame (Maybe Scientific)
- gPornographyLikelihood :: Lens' GoogleCloudVideointelligenceV1_ExplicitContentFrame (Maybe GoogleCloudVideointelligenceV1_ExplicitContentFramePornographyLikelihood)
- data GoogleCloudVideointelligenceV1beta2_VideoSegment
- googleCloudVideointelligenceV1beta2_VideoSegment :: GoogleCloudVideointelligenceV1beta2_VideoSegment
- gStartTimeOffSet :: Lens' GoogleCloudVideointelligenceV1beta2_VideoSegment (Maybe Scientific)
- gEndTimeOffSet :: Lens' GoogleCloudVideointelligenceV1beta2_VideoSegment (Maybe Scientific)
- data GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationResults
- googleCloudVideointelligenceV1p2beta1_VideoAnnotationResults :: GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationResults
- gcvvvarsShotAnnotations :: Lens' GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationResults [GoogleCloudVideointelligenceV1p2beta1_VideoSegment]
- gcvvvarsShotLabelAnnotations :: Lens' GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationResults [GoogleCloudVideointelligenceV1p2beta1_LabelAnnotation]
- gcvvvarsInputURI :: Lens' GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationResults (Maybe Text)
- gcvvvarsError :: Lens' GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationResults (Maybe GoogleRpc_Status)
- gcvvvarsObjectAnnotations :: Lens' GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationResults [GoogleCloudVideointelligenceV1p2beta1_ObjectTrackingAnnotation]
- gcvvvarsFrameLabelAnnotations :: Lens' GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationResults [GoogleCloudVideointelligenceV1p2beta1_LabelAnnotation]
- gcvvvarsSpeechTranscriptions :: Lens' GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationResults [GoogleCloudVideointelligenceV1p2beta1_SpeechTranscription]
- gcvvvarsTextAnnotations :: Lens' GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationResults [GoogleCloudVideointelligenceV1p2beta1_TextAnnotation]
- gcvvvarsSegmentLabelAnnotations :: Lens' GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationResults [GoogleCloudVideointelligenceV1p2beta1_LabelAnnotation]
- gcvvvarsExplicitAnnotation :: Lens' GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationResults (Maybe GoogleCloudVideointelligenceV1p2beta1_ExplicitContentAnnotation)
- data GoogleCloudVideointelligenceV1beta2_LabelSegment
- googleCloudVideointelligenceV1beta2_LabelSegment :: GoogleCloudVideointelligenceV1beta2_LabelSegment
- gcvvlscConfidence :: Lens' GoogleCloudVideointelligenceV1beta2_LabelSegment (Maybe Double)
- gcvvlscSegment :: Lens' GoogleCloudVideointelligenceV1beta2_LabelSegment (Maybe GoogleCloudVideointelligenceV1beta2_VideoSegment)
- data GoogleCloudVideointelligenceV1p2beta1_NormalizedBoundingPoly
- googleCloudVideointelligenceV1p2beta1_NormalizedBoundingPoly :: GoogleCloudVideointelligenceV1p2beta1_NormalizedBoundingPoly
- gcvvnbpVertices :: Lens' GoogleCloudVideointelligenceV1p2beta1_NormalizedBoundingPoly [GoogleCloudVideointelligenceV1p2beta1_NormalizedVertex]
- data GoogleCloudVideointelligenceV1beta2_WordInfo
- googleCloudVideointelligenceV1beta2_WordInfo :: GoogleCloudVideointelligenceV1beta2_WordInfo
- goooStartTime :: Lens' GoogleCloudVideointelligenceV1beta2_WordInfo (Maybe Scientific)
- goooConfidence :: Lens' GoogleCloudVideointelligenceV1beta2_WordInfo (Maybe Double)
- goooEndTime :: Lens' GoogleCloudVideointelligenceV1beta2_WordInfo (Maybe Scientific)
- goooWord :: Lens' GoogleCloudVideointelligenceV1beta2_WordInfo (Maybe Text)
- goooSpeakerTag :: Lens' GoogleCloudVideointelligenceV1beta2_WordInfo (Maybe Int32)
- data GoogleCloudVideointelligenceV1_ExplicitContentAnnotation
- googleCloudVideointelligenceV1_ExplicitContentAnnotation :: GoogleCloudVideointelligenceV1_ExplicitContentAnnotation
- gooFrames :: Lens' GoogleCloudVideointelligenceV1_ExplicitContentAnnotation [GoogleCloudVideointelligenceV1_ExplicitContentFrame]
- data GoogleCloudVideointelligenceV1_AnnotateVideoResponse
- googleCloudVideointelligenceV1_AnnotateVideoResponse :: GoogleCloudVideointelligenceV1_AnnotateVideoResponse
- gooAnnotationResults :: Lens' GoogleCloudVideointelligenceV1_AnnotateVideoResponse [GoogleCloudVideointelligenceV1_VideoAnnotationResults]
- data GoogleCloudVideointelligenceV1p2beta1_NormalizedVertex
- googleCloudVideointelligenceV1p2beta1_NormalizedVertex :: GoogleCloudVideointelligenceV1p2beta1_NormalizedVertex
- gcvvnvX :: Lens' GoogleCloudVideointelligenceV1p2beta1_NormalizedVertex (Maybe Double)
- gcvvnvY :: Lens' GoogleCloudVideointelligenceV1p2beta1_NormalizedVertex (Maybe Double)
- data GoogleCloudVideointelligenceV1p2beta1_LabelAnnotation
- googleCloudVideointelligenceV1p2beta1_LabelAnnotation :: GoogleCloudVideointelligenceV1p2beta1_LabelAnnotation
- gcvvlacCategoryEntities :: Lens' GoogleCloudVideointelligenceV1p2beta1_LabelAnnotation [GoogleCloudVideointelligenceV1p2beta1_Entity]
- gcvvlacFrames :: Lens' GoogleCloudVideointelligenceV1p2beta1_LabelAnnotation [GoogleCloudVideointelligenceV1p2beta1_LabelFrame]
- gcvvlacSegments :: Lens' GoogleCloudVideointelligenceV1p2beta1_LabelAnnotation [GoogleCloudVideointelligenceV1p2beta1_LabelSegment]
- gcvvlacEntity :: Lens' GoogleCloudVideointelligenceV1p2beta1_LabelAnnotation (Maybe GoogleCloudVideointelligenceV1p2beta1_Entity)
- data GoogleCloudVideointelligenceV1beta2_SpeechRecognitionAlternative
- googleCloudVideointelligenceV1beta2_SpeechRecognitionAlternative :: GoogleCloudVideointelligenceV1beta2_SpeechRecognitionAlternative
- gcvvsra1Confidence :: Lens' GoogleCloudVideointelligenceV1beta2_SpeechRecognitionAlternative (Maybe Double)
- gcvvsra1Words :: Lens' GoogleCloudVideointelligenceV1beta2_SpeechRecognitionAlternative [GoogleCloudVideointelligenceV1beta2_WordInfo]
- gcvvsra1Transcript :: Lens' GoogleCloudVideointelligenceV1beta2_SpeechRecognitionAlternative (Maybe Text)
- data GoogleCloudVideointelligenceV1p2beta1_ExplicitContentFramePornographyLikelihood
- data GoogleCloudVideointelligenceV1p1beta1_VideoSegment
- googleCloudVideointelligenceV1p1beta1_VideoSegment :: GoogleCloudVideointelligenceV1p1beta1_VideoSegment
- gooStartTimeOffSet :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoSegment (Maybe Scientific)
- gooEndTimeOffSet :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoSegment (Maybe Scientific)
- data GoogleCloudVideointelligenceV1p1beta1_ExplicitContentDetectionConfig
- googleCloudVideointelligenceV1p1beta1_ExplicitContentDetectionConfig :: GoogleCloudVideointelligenceV1p1beta1_ExplicitContentDetectionConfig
- gcvvecdcModel :: Lens' GoogleCloudVideointelligenceV1p1beta1_ExplicitContentDetectionConfig (Maybe Text)
- data GoogleCloudVideointelligenceV1p1beta1_LabelAnnotation
- googleCloudVideointelligenceV1p1beta1_LabelAnnotation :: GoogleCloudVideointelligenceV1p1beta1_LabelAnnotation
- ggCategoryEntities :: Lens' GoogleCloudVideointelligenceV1p1beta1_LabelAnnotation [GoogleCloudVideointelligenceV1p1beta1_Entity]
- ggFrames :: Lens' GoogleCloudVideointelligenceV1p1beta1_LabelAnnotation [GoogleCloudVideointelligenceV1p1beta1_LabelFrame]
- ggSegments :: Lens' GoogleCloudVideointelligenceV1p1beta1_LabelAnnotation [GoogleCloudVideointelligenceV1p1beta1_LabelSegment]
- ggEntity :: Lens' GoogleCloudVideointelligenceV1p1beta1_LabelAnnotation (Maybe GoogleCloudVideointelligenceV1p1beta1_Entity)
- data GoogleCloudVideointelligenceV1_LabelFrame
- googleCloudVideointelligenceV1_LabelFrame :: GoogleCloudVideointelligenceV1_LabelFrame
- gcvvlf1TimeOffSet :: Lens' GoogleCloudVideointelligenceV1_LabelFrame (Maybe Scientific)
- gcvvlf1Confidence :: Lens' GoogleCloudVideointelligenceV1_LabelFrame (Maybe Double)
- data GoogleCloudVideointelligenceV1p1beta1_LabelDetectionConfigLabelDetectionMode
- data GoogleCloudVideointelligenceV1p1beta1_ExplicitContentFramePornographyLikelihood
- data GoogleCloudVideointelligenceV1p2beta1_ExplicitContentFrame
- googleCloudVideointelligenceV1p2beta1_ExplicitContentFrame :: GoogleCloudVideointelligenceV1p2beta1_ExplicitContentFrame
- gooTimeOffSet :: Lens' GoogleCloudVideointelligenceV1p2beta1_ExplicitContentFrame (Maybe Scientific)
- gooPornographyLikelihood :: Lens' GoogleCloudVideointelligenceV1p2beta1_ExplicitContentFrame (Maybe GoogleCloudVideointelligenceV1p2beta1_ExplicitContentFramePornographyLikelihood)
- data GoogleCloudVideointelligenceV1_Entity
- googleCloudVideointelligenceV1_Entity :: GoogleCloudVideointelligenceV1_Entity
- gcvvecLanguageCode :: Lens' GoogleCloudVideointelligenceV1_Entity (Maybe Text)
- gcvvecEntityId :: Lens' GoogleCloudVideointelligenceV1_Entity (Maybe Text)
- gcvvecDescription :: Lens' GoogleCloudVideointelligenceV1_Entity (Maybe Text)
- data GoogleCloudVideointelligenceV1beta2_VideoAnnotationProgress
- googleCloudVideointelligenceV1beta2_VideoAnnotationProgress :: GoogleCloudVideointelligenceV1beta2_VideoAnnotationProgress
- gcvvvapsStartTime :: Lens' GoogleCloudVideointelligenceV1beta2_VideoAnnotationProgress (Maybe UTCTime)
- gcvvvapsInputURI :: Lens' GoogleCloudVideointelligenceV1beta2_VideoAnnotationProgress (Maybe Text)
- gcvvvapsProgressPercent :: Lens' GoogleCloudVideointelligenceV1beta2_VideoAnnotationProgress (Maybe Int32)
- gcvvvapsUpdateTime :: Lens' GoogleCloudVideointelligenceV1beta2_VideoAnnotationProgress (Maybe UTCTime)
- data GoogleCloudVideointelligenceV1beta2_SpeechTranscription
- googleCloudVideointelligenceV1beta2_SpeechTranscription :: GoogleCloudVideointelligenceV1beta2_SpeechTranscription
- gcvvstcAlternatives :: Lens' GoogleCloudVideointelligenceV1beta2_SpeechTranscription [GoogleCloudVideointelligenceV1beta2_SpeechRecognitionAlternative]
- gcvvstcLanguageCode :: Lens' GoogleCloudVideointelligenceV1beta2_SpeechTranscription (Maybe Text)
- data GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationResults
- googleCloudVideointelligenceV1p1beta1_VideoAnnotationResults :: GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationResults
- gooShotAnnotations :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationResults [GoogleCloudVideointelligenceV1p1beta1_VideoSegment]
- gooShotLabelAnnotations :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationResults [GoogleCloudVideointelligenceV1p1beta1_LabelAnnotation]
- gooInputURI :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationResults (Maybe Text)
- gooError :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationResults (Maybe GoogleRpc_Status)
- gooFrameLabelAnnotations :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationResults [GoogleCloudVideointelligenceV1p1beta1_LabelAnnotation]
- gooSpeechTranscriptions :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationResults [GoogleCloudVideointelligenceV1p1beta1_SpeechTranscription]
- gooSegmentLabelAnnotations :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationResults [GoogleCloudVideointelligenceV1p1beta1_LabelAnnotation]
- gooExplicitAnnotation :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationResults (Maybe GoogleCloudVideointelligenceV1p1beta1_ExplicitContentAnnotation)
- data Xgafv
- data GoogleCloudVideointelligenceV1_AnnotateVideoProgress
- googleCloudVideointelligenceV1_AnnotateVideoProgress :: GoogleCloudVideointelligenceV1_AnnotateVideoProgress
- gooAnnotationProgress :: Lens' GoogleCloudVideointelligenceV1_AnnotateVideoProgress [GoogleCloudVideointelligenceV1_VideoAnnotationProgress]
- data GoogleCloudVideointelligenceV1p1beta1_SpeechTranscriptionConfig
- googleCloudVideointelligenceV1p1beta1_SpeechTranscriptionConfig :: GoogleCloudVideointelligenceV1p1beta1_SpeechTranscriptionConfig
- gcvvstccSpeechContexts :: Lens' GoogleCloudVideointelligenceV1p1beta1_SpeechTranscriptionConfig [GoogleCloudVideointelligenceV1p1beta1_SpeechContext]
- gcvvstccLanguageCode :: Lens' GoogleCloudVideointelligenceV1p1beta1_SpeechTranscriptionConfig (Maybe Text)
- gcvvstccAudioTracks :: Lens' GoogleCloudVideointelligenceV1p1beta1_SpeechTranscriptionConfig [Int32]
- gcvvstccEnableAutomaticPunctuation :: Lens' GoogleCloudVideointelligenceV1p1beta1_SpeechTranscriptionConfig (Maybe Bool)
- gcvvstccMaxAlternatives :: Lens' GoogleCloudVideointelligenceV1p1beta1_SpeechTranscriptionConfig (Maybe Int32)
- gcvvstccEnableSpeakerDiarization :: Lens' GoogleCloudVideointelligenceV1p1beta1_SpeechTranscriptionConfig (Maybe Bool)
- gcvvstccFilterProfanity :: Lens' GoogleCloudVideointelligenceV1p1beta1_SpeechTranscriptionConfig (Maybe Bool)
- gcvvstccDiarizationSpeakerCount :: Lens' GoogleCloudVideointelligenceV1p1beta1_SpeechTranscriptionConfig (Maybe Int32)
- gcvvstccEnableWordConfidence :: Lens' GoogleCloudVideointelligenceV1p1beta1_SpeechTranscriptionConfig (Maybe Bool)
- data GoogleCloudVideointelligenceV1beta2_ExplicitContentFramePornographyLikelihood
- data GoogleLongrunning_OperationResponse
- googleLongrunning_OperationResponse :: HashMap Text JSONValue -> GoogleLongrunning_OperationResponse
- glorAddtional :: Lens' GoogleLongrunning_OperationResponse (HashMap Text JSONValue)
- data GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationProgress
- googleCloudVideointelligenceV1p1beta1_VideoAnnotationProgress :: GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationProgress
- gcvvvapcStartTime :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationProgress (Maybe UTCTime)
- gcvvvapcInputURI :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationProgress (Maybe Text)
- gcvvvapcProgressPercent :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationProgress (Maybe Int32)
- gcvvvapcUpdateTime :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationProgress (Maybe UTCTime)
- data GoogleCloudVideointelligenceV1p2beta1_TextFrame
- googleCloudVideointelligenceV1p2beta1_TextFrame :: GoogleCloudVideointelligenceV1p2beta1_TextFrame
- gcvvtfRotatedBoundingBox :: Lens' GoogleCloudVideointelligenceV1p2beta1_TextFrame (Maybe GoogleCloudVideointelligenceV1p2beta1_NormalizedBoundingPoly)
- gcvvtfTimeOffSet :: Lens' GoogleCloudVideointelligenceV1p2beta1_TextFrame (Maybe Scientific)
- data GoogleCloudVideointelligenceV1beta2_LabelAnnotation
- googleCloudVideointelligenceV1beta2_LabelAnnotation :: GoogleCloudVideointelligenceV1beta2_LabelAnnotation
- goooCategoryEntities :: Lens' GoogleCloudVideointelligenceV1beta2_LabelAnnotation [GoogleCloudVideointelligenceV1beta2_Entity]
- goooFrames :: Lens' GoogleCloudVideointelligenceV1beta2_LabelAnnotation [GoogleCloudVideointelligenceV1beta2_LabelFrame]
- goooSegments :: Lens' GoogleCloudVideointelligenceV1beta2_LabelAnnotation [GoogleCloudVideointelligenceV1beta2_LabelSegment]
- goooEntity :: Lens' GoogleCloudVideointelligenceV1beta2_LabelAnnotation (Maybe GoogleCloudVideointelligenceV1beta2_Entity)
- data GoogleCloudVideointelligenceV1p1beta1_LabelDetectionConfig
- googleCloudVideointelligenceV1p1beta1_LabelDetectionConfig :: GoogleCloudVideointelligenceV1p1beta1_LabelDetectionConfig
- gcvvldcLabelDetectionMode :: Lens' GoogleCloudVideointelligenceV1p1beta1_LabelDetectionConfig (Maybe GoogleCloudVideointelligenceV1p1beta1_LabelDetectionConfigLabelDetectionMode)
- gcvvldcStationaryCamera :: Lens' GoogleCloudVideointelligenceV1p1beta1_LabelDetectionConfig (Maybe Bool)
- gcvvldcModel :: Lens' GoogleCloudVideointelligenceV1p1beta1_LabelDetectionConfig (Maybe Text)
- data GoogleCloudVideointelligenceV1p1beta1_SpeechTranscription
- googleCloudVideointelligenceV1p1beta1_SpeechTranscription :: GoogleCloudVideointelligenceV1p1beta1_SpeechTranscription
- ggAlternatives :: Lens' GoogleCloudVideointelligenceV1p1beta1_SpeechTranscription [GoogleCloudVideointelligenceV1p1beta1_SpeechRecognitionAlternative]
- ggLanguageCode :: Lens' GoogleCloudVideointelligenceV1p1beta1_SpeechTranscription (Maybe Text)
- data GoogleCloudVideointelligenceV1beta2_VideoAnnotationResults
- googleCloudVideointelligenceV1beta2_VideoAnnotationResults :: GoogleCloudVideointelligenceV1beta2_VideoAnnotationResults
- gcvvvarcShotAnnotations :: Lens' GoogleCloudVideointelligenceV1beta2_VideoAnnotationResults [GoogleCloudVideointelligenceV1beta2_VideoSegment]
- gcvvvarcShotLabelAnnotations :: Lens' GoogleCloudVideointelligenceV1beta2_VideoAnnotationResults [GoogleCloudVideointelligenceV1beta2_LabelAnnotation]
- gcvvvarcInputURI :: Lens' GoogleCloudVideointelligenceV1beta2_VideoAnnotationResults (Maybe Text)
- gcvvvarcError :: Lens' GoogleCloudVideointelligenceV1beta2_VideoAnnotationResults (Maybe GoogleRpc_Status)
- gcvvvarcFrameLabelAnnotations :: Lens' GoogleCloudVideointelligenceV1beta2_VideoAnnotationResults [GoogleCloudVideointelligenceV1beta2_LabelAnnotation]
- gcvvvarcSpeechTranscriptions :: Lens' GoogleCloudVideointelligenceV1beta2_VideoAnnotationResults [GoogleCloudVideointelligenceV1beta2_SpeechTranscription]
- gcvvvarcSegmentLabelAnnotations :: Lens' GoogleCloudVideointelligenceV1beta2_VideoAnnotationResults [GoogleCloudVideointelligenceV1beta2_LabelAnnotation]
- gcvvvarcExplicitAnnotation :: Lens' GoogleCloudVideointelligenceV1beta2_VideoAnnotationResults (Maybe GoogleCloudVideointelligenceV1beta2_ExplicitContentAnnotation)
- data GoogleCloudVideointelligenceV1p2beta1_LabelSegment
- googleCloudVideointelligenceV1p2beta1_LabelSegment :: GoogleCloudVideointelligenceV1p2beta1_LabelSegment
- gcvvls1Confidence :: Lens' GoogleCloudVideointelligenceV1p2beta1_LabelSegment (Maybe Double)
- gcvvls1Segment :: Lens' GoogleCloudVideointelligenceV1p2beta1_LabelSegment (Maybe GoogleCloudVideointelligenceV1p2beta1_VideoSegment)
- data GoogleCloudVideointelligenceV1p2beta1_NormalizedBoundingBox
- googleCloudVideointelligenceV1p2beta1_NormalizedBoundingBox :: GoogleCloudVideointelligenceV1p2beta1_NormalizedBoundingBox
- gcvvnbbBottom :: Lens' GoogleCloudVideointelligenceV1p2beta1_NormalizedBoundingBox (Maybe Double)
- gcvvnbbLeft :: Lens' GoogleCloudVideointelligenceV1p2beta1_NormalizedBoundingBox (Maybe Double)
- gcvvnbbRight :: Lens' GoogleCloudVideointelligenceV1p2beta1_NormalizedBoundingBox (Maybe Double)
- gcvvnbbTop :: Lens' GoogleCloudVideointelligenceV1p2beta1_NormalizedBoundingBox (Maybe Double)
- data GoogleCloudVideointelligenceV1p2beta1_TextSegment
- googleCloudVideointelligenceV1p2beta1_TextSegment :: GoogleCloudVideointelligenceV1p2beta1_TextSegment
- gcvvtsFrames :: Lens' GoogleCloudVideointelligenceV1p2beta1_TextSegment [GoogleCloudVideointelligenceV1p2beta1_TextFrame]
- gcvvtsConfidence :: Lens' GoogleCloudVideointelligenceV1p2beta1_TextSegment (Maybe Double)
- gcvvtsSegment :: Lens' GoogleCloudVideointelligenceV1p2beta1_TextSegment (Maybe GoogleCloudVideointelligenceV1p2beta1_VideoSegment)
- data GoogleCloudVideointelligenceV1p2beta1_SpeechTranscription
- googleCloudVideointelligenceV1p2beta1_SpeechTranscription :: GoogleCloudVideointelligenceV1p2beta1_SpeechTranscription
- goooAlternatives :: Lens' GoogleCloudVideointelligenceV1p2beta1_SpeechTranscription [GoogleCloudVideointelligenceV1p2beta1_SpeechRecognitionAlternative]
- goooLanguageCode :: Lens' GoogleCloudVideointelligenceV1p2beta1_SpeechTranscription (Maybe Text)
- data GoogleRpc_Status
- googleRpc_Status :: GoogleRpc_Status
- grsDetails :: Lens' GoogleRpc_Status [GoogleRpc_StatusDetailsItem]
- grsCode :: Lens' GoogleRpc_Status (Maybe Int32)
- grsMessage :: Lens' GoogleRpc_Status (Maybe Text)
- data GoogleCloudVideointelligenceV1_VideoSegment
- googleCloudVideointelligenceV1_VideoSegment :: GoogleCloudVideointelligenceV1_VideoSegment
- gcvvvscStartTimeOffSet :: Lens' GoogleCloudVideointelligenceV1_VideoSegment (Maybe Scientific)
- gcvvvscEndTimeOffSet :: Lens' GoogleCloudVideointelligenceV1_VideoSegment (Maybe Scientific)
- data GoogleCloudVideointelligenceV1p1beta1_ExplicitContentAnnotation
- googleCloudVideointelligenceV1p1beta1_ExplicitContentAnnotation :: GoogleCloudVideointelligenceV1p1beta1_ExplicitContentAnnotation
- gcvvecacFrames :: Lens' GoogleCloudVideointelligenceV1p1beta1_ExplicitContentAnnotation [GoogleCloudVideointelligenceV1p1beta1_ExplicitContentFrame]
- data GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoResponse
- googleCloudVideointelligenceV1p1beta1_AnnotateVideoResponse :: GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoResponse
- gcvvavrcAnnotationResults :: Lens' GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoResponse [GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationResults]
- data GoogleCloudVideointelligenceV1beta2_ExplicitContentFrame
- googleCloudVideointelligenceV1beta2_ExplicitContentFrame :: GoogleCloudVideointelligenceV1beta2_ExplicitContentFrame
- gcvvecfcTimeOffSet :: Lens' GoogleCloudVideointelligenceV1beta2_ExplicitContentFrame (Maybe Scientific)
- gcvvecfcPornographyLikelihood :: Lens' GoogleCloudVideointelligenceV1beta2_ExplicitContentFrame (Maybe GoogleCloudVideointelligenceV1beta2_ExplicitContentFramePornographyLikelihood)
- data GoogleCloudVideointelligenceV1p1beta1_SpeechContext
- googleCloudVideointelligenceV1p1beta1_SpeechContext :: GoogleCloudVideointelligenceV1p1beta1_SpeechContext
- gcvvscPhrases :: Lens' GoogleCloudVideointelligenceV1p1beta1_SpeechContext [Text]
- data GoogleCloudVideointelligenceV1_LabelSegment
- googleCloudVideointelligenceV1_LabelSegment :: GoogleCloudVideointelligenceV1_LabelSegment
- g2Confidence :: Lens' GoogleCloudVideointelligenceV1_LabelSegment (Maybe Double)
- g2Segment :: Lens' GoogleCloudVideointelligenceV1_LabelSegment (Maybe GoogleCloudVideointelligenceV1_VideoSegment)
Service Configuration
videoIntelligenceService :: ServiceConfig Source #
Default request referring to version v1p1beta1
of the Cloud Video Intelligence API. This contains the host and root path used as a starting point for constructing service requests.
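A minimal sketch of wiring this configuration into a `gogol` environment (assumptions: the core `gogol` package is on the path, credentials are discoverable via the usual application-default mechanism, and logging to `stdout` is acceptable; everything other than `cloudPlatformScope` comes from `Network.Google`):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((.~), (<&>))
import Network.Google -- newEnv, newLogger, envLogger, envScopes
import Network.Google.VideoIntelligence
import System.IO (stdout)

main :: IO ()
main = do
  -- Credentials are discovered from the environment
  -- (e.g. GOOGLE_APPLICATION_CREDENTIALS).
  lgr <- newLogger Debug stdout
  env <- newEnv <&> (envLogger .~ lgr)
                  . (envScopes .~ cloudPlatformScope)
  -- 'env' can now be passed to 'runResourceT . runGoogle env'
  -- together with a request built against 'VideoIntelligenceAPI'.
  pure ()
```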
OAuth Scopes
cloudPlatformScope :: Proxy '["https://www.googleapis.com/auth/cloud-platform"] Source #
View and manage your data across Google Cloud Platform services
API Declaration
type VideoIntelligenceAPI = VideosAnnotateResource Source #
Represents the entirety of the methods and resources available for the Cloud Video Intelligence API service.
Resources
videointelligence.videos.annotate
Types
GoogleRpc_StatusDetailsItem
data GoogleRpc_StatusDetailsItem Source #
Instances
googleRpc_StatusDetailsItem :: HashMap Text JSONValue -> GoogleRpc_StatusDetailsItem Source #
Creates a value of GoogleRpc_StatusDetailsItem
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
grsdiAddtional :: Lens' GoogleRpc_StatusDetailsItem (HashMap Text JSONValue) Source #
Properties of the object. Contains field `@type` with a type URL.
GoogleCloudVideointelligenceV1beta2_ExplicitContentAnnotation
data GoogleCloudVideointelligenceV1beta2_ExplicitContentAnnotation Source #
Explicit content annotation (based on per-frame visual signals only). If no explicit content has been detected in a frame, no annotations are present for that frame.
See: googleCloudVideointelligenceV1beta2_ExplicitContentAnnotation
smart constructor.
Instances
googleCloudVideointelligenceV1beta2_ExplicitContentAnnotation :: GoogleCloudVideointelligenceV1beta2_ExplicitContentAnnotation Source #
Creates a value of GoogleCloudVideointelligenceV1beta2_ExplicitContentAnnotation
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvecaFrames :: Lens' GoogleCloudVideointelligenceV1beta2_ExplicitContentAnnotation [GoogleCloudVideointelligenceV1beta2_ExplicitContentFrame] Source #
All video frames where explicit content was detected.
GoogleCloudVideointelligenceV1_SpeechRecognitionAlternative
data GoogleCloudVideointelligenceV1_SpeechRecognitionAlternative Source #
Alternative hypotheses (a.k.a. n-best list).
See: googleCloudVideointelligenceV1_SpeechRecognitionAlternative
smart constructor.
Instances
googleCloudVideointelligenceV1_SpeechRecognitionAlternative :: GoogleCloudVideointelligenceV1_SpeechRecognitionAlternative Source #
Creates a value of GoogleCloudVideointelligenceV1_SpeechRecognitionAlternative
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvsraConfidence :: Lens' GoogleCloudVideointelligenceV1_SpeechRecognitionAlternative (Maybe Double) Source #
The confidence estimate between 0.0 and 1.0. A higher number indicates an estimated greater likelihood that the recognized words are correct. This field is typically provided only for the top hypothesis, and only for `is_final=true` results. Clients should not rely on the `confidence` field as it is not guaranteed to be accurate or consistent. The default of 0.0 is a sentinel value indicating `confidence` was not set.
gcvvsraWords :: Lens' GoogleCloudVideointelligenceV1_SpeechRecognitionAlternative [GoogleCloudVideointelligenceV1_WordInfo] Source #
A list of word-specific information for each recognized word.
gcvvsraTranscript :: Lens' GoogleCloudVideointelligenceV1_SpeechRecognitionAlternative (Maybe Text) Source #
Transcript text representing the words that the user spoke.
GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoRequest
data GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoRequest Source #
Video annotation request.
See: googleCloudVideointelligenceV1p1beta1_AnnotateVideoRequest
smart constructor.
Instances
googleCloudVideointelligenceV1p1beta1_AnnotateVideoRequest :: GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoRequest Source #
Creates a value of GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoRequest
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvavrInputURI :: Lens' GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoRequest (Maybe Text) Source #
Input video location. Currently, only Google Cloud Storage URIs are supported, which must be specified in the following format: `gs://bucket-id/object-id` (other URI formats return google.rpc.Code.INVALID_ARGUMENT). For more information, see Request URIs. A video URI may include wildcards in `object-id`, and thus identify multiple videos. Supported wildcards: '*' to match 0 or more characters; '?' to match 1 character. If unset, the input video should be embedded in the request as `input_content`. If set, `input_content` should be unset.
gcvvavrVideoContext :: Lens' GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoRequest (Maybe GoogleCloudVideointelligenceV1p1beta1_VideoContext) Source #
Additional video context and/or feature-specific parameters.
gcvvavrInputContent :: Lens' GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoRequest (Maybe ByteString) Source #
The video data bytes. If unset, the input video(s) should be specified via `input_uri`. If set, `input_uri` should be unset.
gcvvavrFeatures :: Lens' GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoRequest [Text] Source #
Requested video annotation features.
gcvvavrLocationId :: Lens' GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoRequest (Maybe Text) Source #
Optional cloud region where annotation should take place. Supported cloud regions: `us-east1`, `us-west1`, `europe-west1`, `asia-east1`. If no region is specified, a region will be determined based on video file location.
gcvvavrOutputURI :: Lens' GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoRequest (Maybe Text) Source #
Optional location where the output (in JSON format) should be stored. Currently, only Google Cloud Storage URIs are supported, which must be specified in the following format: `gs://bucket-id/object-id` (other URI formats return google.rpc.Code.INVALID_ARGUMENT). For more information, see Request URIs.
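Putting the smart constructor and its lenses together, a request for a couple of features over a hypothetical Cloud Storage URI might be built like this (`?~` sets a `Maybe` field; `&` and `.~` come from `lens`; the bucket and object names are placeholders):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((&), (.~), (?~))
import Network.Google.VideoIntelligence

-- Annotate a video stored in GCS, asking for label and
-- shot-change detection.
exampleRequest :: GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoRequest
exampleRequest =
  googleCloudVideointelligenceV1p1beta1_AnnotateVideoRequest
    & gcvvavrInputURI ?~ "gs://example-bucket/example-video.mp4"
    & gcvvavrFeatures .~ ["LABEL_DETECTION", "SHOT_CHANGE_DETECTION"]
```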
GoogleCloudVideointelligenceV1beta2_AnnotateVideoResponse
data GoogleCloudVideointelligenceV1beta2_AnnotateVideoResponse Source #
Video annotation response. Included in the `response` field of the `Operation` returned by the `GetOperation` call of the `google::longrunning::Operations` service.
See: googleCloudVideointelligenceV1beta2_AnnotateVideoResponse
smart constructor.
Instances
googleCloudVideointelligenceV1beta2_AnnotateVideoResponse :: GoogleCloudVideointelligenceV1beta2_AnnotateVideoResponse Source #
Creates a value of GoogleCloudVideointelligenceV1beta2_AnnotateVideoResponse
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvavrAnnotationResults :: Lens' GoogleCloudVideointelligenceV1beta2_AnnotateVideoResponse [GoogleCloudVideointelligenceV1beta2_VideoAnnotationResults] Source #
Annotation results for all videos specified in `AnnotateVideoRequest`.
GoogleCloudVideointelligenceV1_WordInfo
data GoogleCloudVideointelligenceV1_WordInfo Source #
Word-specific information for recognized words. Word information is only included in the response when certain request parameters are set, such as `enable_word_time_offsets`.
See: googleCloudVideointelligenceV1_WordInfo
smart constructor.
Instances
googleCloudVideointelligenceV1_WordInfo :: GoogleCloudVideointelligenceV1_WordInfo Source #
Creates a value of GoogleCloudVideointelligenceV1_WordInfo
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvwiStartTime :: Lens' GoogleCloudVideointelligenceV1_WordInfo (Maybe Scientific) Source #
Time offset relative to the beginning of the audio, and corresponding to the start of the spoken word. This field is only set if `enable_word_time_offsets=true` and only in the top hypothesis. This is an experimental feature and the accuracy of the time offset can vary.
gcvvwiConfidence :: Lens' GoogleCloudVideointelligenceV1_WordInfo (Maybe Double) Source #
Output only. The confidence estimate between 0.0 and 1.0. A higher number indicates an estimated greater likelihood that the recognized words are correct. This field is set only for the top alternative. This field is not guaranteed to be accurate, and users should not rely on it always being provided. The default of 0.0 is a sentinel value indicating `confidence` was not set.
gcvvwiEndTime :: Lens' GoogleCloudVideointelligenceV1_WordInfo (Maybe Scientific) Source #
Time offset relative to the beginning of the audio, and corresponding to the end of the spoken word. This field is only set if `enable_word_time_offsets=true` and only in the top hypothesis. This is an experimental feature and the accuracy of the time offset can vary.
gcvvwiWord :: Lens' GoogleCloudVideointelligenceV1_WordInfo (Maybe Text) Source #
The word corresponding to this set of information.
gcvvwiSpeakerTag :: Lens' GoogleCloudVideointelligenceV1_WordInfo (Maybe Int32) Source #
Output only. A distinct integer value is assigned for every speaker within the audio. This field specifies which one of those speakers was detected to have spoken this word. Value ranges from 1 up to diarization_speaker_count, and is only set if speaker diarization is enabled.
GoogleCloudVideointelligenceV1p1beta1_ExplicitContentFrame
data GoogleCloudVideointelligenceV1p1beta1_ExplicitContentFrame Source #
Video frame level annotation results for explicit content.
See: googleCloudVideointelligenceV1p1beta1_ExplicitContentFrame
smart constructor.
Instances
googleCloudVideointelligenceV1p1beta1_ExplicitContentFrame :: GoogleCloudVideointelligenceV1p1beta1_ExplicitContentFrame Source #
Creates a value of GoogleCloudVideointelligenceV1p1beta1_ExplicitContentFrame
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvecfTimeOffSet :: Lens' GoogleCloudVideointelligenceV1p1beta1_ExplicitContentFrame (Maybe Scientific) Source #
Time-offset, relative to the beginning of the video, corresponding to the video frame for this location.
gcvvecfPornographyLikelihood :: Lens' GoogleCloudVideointelligenceV1p1beta1_ExplicitContentFrame (Maybe GoogleCloudVideointelligenceV1p1beta1_ExplicitContentFramePornographyLikelihood) Source #
Likelihood of the pornography content.
GoogleCloudVideointelligenceV1beta2_Entity
data GoogleCloudVideointelligenceV1beta2_Entity Source #
Detected entity from video analysis.
See: googleCloudVideointelligenceV1beta2_Entity
smart constructor.
Instances
googleCloudVideointelligenceV1beta2_Entity :: GoogleCloudVideointelligenceV1beta2_Entity Source #
Creates a value of GoogleCloudVideointelligenceV1beta2_Entity
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvveLanguageCode :: Lens' GoogleCloudVideointelligenceV1beta2_Entity (Maybe Text) Source #
Language code for `description` in BCP-47 format.
gcvveEntityId :: Lens' GoogleCloudVideointelligenceV1beta2_Entity (Maybe Text) Source #
Opaque entity ID. Some IDs may be available in Google Knowledge Graph Search API.
gcvveDescription :: Lens' GoogleCloudVideointelligenceV1beta2_Entity (Maybe Text) Source #
Textual description, e.g. `Fixed-gear bicycle`.
GoogleCloudVideointelligenceV1p2beta1_TextAnnotation
data GoogleCloudVideointelligenceV1p2beta1_TextAnnotation Source #
Annotations related to one detected OCR text snippet. This will contain the corresponding text, confidence value, and frame level information for each detection.
See: googleCloudVideointelligenceV1p2beta1_TextAnnotation
smart constructor.
Instances
googleCloudVideointelligenceV1p2beta1_TextAnnotation :: GoogleCloudVideointelligenceV1p2beta1_TextAnnotation Source #
Creates a value of GoogleCloudVideointelligenceV1p2beta1_TextAnnotation
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvtaText :: Lens' GoogleCloudVideointelligenceV1p2beta1_TextAnnotation (Maybe Text) Source #
The detected text.
gcvvtaSegments :: Lens' GoogleCloudVideointelligenceV1p2beta1_TextAnnotation [GoogleCloudVideointelligenceV1p2beta1_TextSegment] Source #
All video segments where OCR detected text appears.
GoogleCloudVideointelligenceV1p2beta1_VideoSegment
data GoogleCloudVideointelligenceV1p2beta1_VideoSegment Source #
Video segment.
See: googleCloudVideointelligenceV1p2beta1_VideoSegment
smart constructor.
Instances
googleCloudVideointelligenceV1p2beta1_VideoSegment :: GoogleCloudVideointelligenceV1p2beta1_VideoSegment Source #
Creates a value of GoogleCloudVideointelligenceV1p2beta1_VideoSegment
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvvsStartTimeOffSet :: Lens' GoogleCloudVideointelligenceV1p2beta1_VideoSegment (Maybe Scientific) Source #
Time-offset, relative to the beginning of the video, corresponding to the start of the segment (inclusive).
gcvvvsEndTimeOffSet :: Lens' GoogleCloudVideointelligenceV1p2beta1_VideoSegment (Maybe Scientific) Source #
Time-offset, relative to the beginning of the video, corresponding to the end of the segment (inclusive).
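Since both offsets are optional, a segment's duration is only defined when both ends are present. A small helper illustrating this (a sketch, not part of this module):

```haskell
import Control.Lens ((^.))
import Data.Scientific (Scientific)
import Network.Google.VideoIntelligence

-- Duration in seconds, when both the start and end offsets are set.
segmentDuration
  :: GoogleCloudVideointelligenceV1p2beta1_VideoSegment
  -> Maybe Scientific
segmentDuration vs =
  (-) <$> (vs ^. gcvvvsEndTimeOffSet) <*> (vs ^. gcvvvsStartTimeOffSet)
```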
GoogleCloudVideointelligenceV1_VideoAnnotationProgress
data GoogleCloudVideointelligenceV1_VideoAnnotationProgress Source #
Annotation progress for a single video.
See: googleCloudVideointelligenceV1_VideoAnnotationProgress
smart constructor.
Instances
googleCloudVideointelligenceV1_VideoAnnotationProgress :: GoogleCloudVideointelligenceV1_VideoAnnotationProgress Source #
Creates a value of GoogleCloudVideointelligenceV1_VideoAnnotationProgress
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvvapStartTime :: Lens' GoogleCloudVideointelligenceV1_VideoAnnotationProgress (Maybe UTCTime) Source #
Time when the request was received.
gcvvvapInputURI :: Lens' GoogleCloudVideointelligenceV1_VideoAnnotationProgress (Maybe Text) Source #
Video file location in Google Cloud Storage.
gcvvvapProgressPercent :: Lens' GoogleCloudVideointelligenceV1_VideoAnnotationProgress (Maybe Int32) Source #
Approximate percentage processed thus far. Guaranteed to be 100 when fully processed.
gcvvvapUpdateTime :: Lens' GoogleCloudVideointelligenceV1_VideoAnnotationProgress (Maybe UTCTime) Source #
Time of the most recent update.
GoogleCloudVideointelligenceV1beta2_LabelFrame
data GoogleCloudVideointelligenceV1beta2_LabelFrame Source #
Video frame level annotation results for label detection.
See: googleCloudVideointelligenceV1beta2_LabelFrame
smart constructor.
Instances
googleCloudVideointelligenceV1beta2_LabelFrame :: GoogleCloudVideointelligenceV1beta2_LabelFrame Source #
Creates a value of GoogleCloudVideointelligenceV1beta2_LabelFrame
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvlfTimeOffSet :: Lens' GoogleCloudVideointelligenceV1beta2_LabelFrame (Maybe Scientific) Source #
Time-offset, relative to the beginning of the video, corresponding to the video frame for this location.
gcvvlfConfidence :: Lens' GoogleCloudVideointelligenceV1beta2_LabelFrame (Maybe Double) Source #
Confidence that the label is accurate. Range: [0, 1].
GoogleCloudVideointelligenceV1_SpeechTranscription
data GoogleCloudVideointelligenceV1_SpeechTranscription Source #
A speech recognition result corresponding to a portion of the audio.
See: googleCloudVideointelligenceV1_SpeechTranscription
smart constructor.
Instances
googleCloudVideointelligenceV1_SpeechTranscription :: GoogleCloudVideointelligenceV1_SpeechTranscription Source #
Creates a value of GoogleCloudVideointelligenceV1_SpeechTranscription
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvstAlternatives :: Lens' GoogleCloudVideointelligenceV1_SpeechTranscription [GoogleCloudVideointelligenceV1_SpeechRecognitionAlternative] Source #
May contain one or more recognition hypotheses (up to the maximum specified in `max_alternatives`). These alternatives are ordered in terms of accuracy, with the top (first) alternative being the most probable, as ranked by the recognizer.
gcvvstLanguageCode :: Lens' GoogleCloudVideointelligenceV1_SpeechTranscription (Maybe Text) Source #
Output only. The BCP-47 language tag of the language in this result. This language code was detected to have the most likelihood of being spoken in the audio.
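Because alternatives are ordered by accuracy, the most probable transcript is the head of the list. A sketch of pulling it out (the helper name is illustrative):

```haskell
import Control.Lens ((^.))
import Data.Text (Text)
import Network.Google.VideoIntelligence

-- Transcript of the top-ranked hypothesis, if any.
topTranscript
  :: GoogleCloudVideointelligenceV1_SpeechTranscription
  -> Maybe Text
topTranscript st =
  case st ^. gcvvstAlternatives of
    (best:_) -> best ^. gcvvsraTranscript
    []       -> Nothing
```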
GoogleCloudVideointelligenceV1beta2_AnnotateVideoProgress
data GoogleCloudVideointelligenceV1beta2_AnnotateVideoProgress Source #
Video annotation progress. Included in the `metadata` field of the `Operation` returned by the `GetOperation` call of the `google::longrunning::Operations` service.
See: googleCloudVideointelligenceV1beta2_AnnotateVideoProgress
smart constructor.
Instances
googleCloudVideointelligenceV1beta2_AnnotateVideoProgress :: GoogleCloudVideointelligenceV1beta2_AnnotateVideoProgress Source #
Creates a value of GoogleCloudVideointelligenceV1beta2_AnnotateVideoProgress
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvavpAnnotationProgress :: Lens' GoogleCloudVideointelligenceV1beta2_AnnotateVideoProgress [GoogleCloudVideointelligenceV1beta2_VideoAnnotationProgress] Source #
Progress metadata for all videos specified in `AnnotateVideoRequest`.
GoogleCloudVideointelligenceV1p2beta1_ObjectTrackingFrame
data GoogleCloudVideointelligenceV1p2beta1_ObjectTrackingFrame Source #
Video frame level annotations for object detection and tracking. This field stores per frame location, time offset, and confidence.
See: googleCloudVideointelligenceV1p2beta1_ObjectTrackingFrame
smart constructor.
Instances
googleCloudVideointelligenceV1p2beta1_ObjectTrackingFrame :: GoogleCloudVideointelligenceV1p2beta1_ObjectTrackingFrame Source #
Creates a value of GoogleCloudVideointelligenceV1p2beta1_ObjectTrackingFrame
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvotfTimeOffSet :: Lens' GoogleCloudVideointelligenceV1p2beta1_ObjectTrackingFrame (Maybe Scientific) Source #
The timestamp of the frame in microseconds.
gcvvotfNormalizedBoundingBox :: Lens' GoogleCloudVideointelligenceV1p2beta1_ObjectTrackingFrame (Maybe GoogleCloudVideointelligenceV1p2beta1_NormalizedBoundingBox) Source #
The normalized bounding box location of this object track for the frame.
GoogleCloudVideointelligenceV1_LabelAnnotation
data GoogleCloudVideointelligenceV1_LabelAnnotation Source #
Label annotation.
See: googleCloudVideointelligenceV1_LabelAnnotation
smart constructor.
Instances
googleCloudVideointelligenceV1_LabelAnnotation :: GoogleCloudVideointelligenceV1_LabelAnnotation Source #
Creates a value of GoogleCloudVideointelligenceV1_LabelAnnotation
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvlaCategoryEntities :: Lens' GoogleCloudVideointelligenceV1_LabelAnnotation [GoogleCloudVideointelligenceV1_Entity] Source #
Common categories for the detected entity. For example, when the label is `Terrier`, the category is likely `dog`. In some cases there might be more than one category; e.g., `Terrier` could also be a `pet`.
gcvvlaFrames :: Lens' GoogleCloudVideointelligenceV1_LabelAnnotation [GoogleCloudVideointelligenceV1_LabelFrame] Source #
All video frames where a label was detected.
gcvvlaSegments :: Lens' GoogleCloudVideointelligenceV1_LabelAnnotation [GoogleCloudVideointelligenceV1_LabelSegment] Source #
All video segments where a label was detected.
gcvvlaEntity :: Lens' GoogleCloudVideointelligenceV1_LabelAnnotation (Maybe GoogleCloudVideointelligenceV1_Entity) Source #
Detected entity.
GoogleCloudVideointelligenceV1p2beta1_SpeechRecognitionAlternative
data GoogleCloudVideointelligenceV1p2beta1_SpeechRecognitionAlternative Source #
Alternative hypotheses (a.k.a. n-best list).
See: googleCloudVideointelligenceV1p2beta1_SpeechRecognitionAlternative
smart constructor.
Instances
googleCloudVideointelligenceV1p2beta1_SpeechRecognitionAlternative :: GoogleCloudVideointelligenceV1p2beta1_SpeechRecognitionAlternative Source #
Creates a value of GoogleCloudVideointelligenceV1p2beta1_SpeechRecognitionAlternative
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gConfidence :: Lens' GoogleCloudVideointelligenceV1p2beta1_SpeechRecognitionAlternative (Maybe Double) Source #
The confidence estimate between 0.0 and 1.0. A higher number indicates an estimated greater likelihood that the recognized words are correct. This field is typically provided only for the top hypothesis, and only for `is_final=true` results. Clients should not rely on the `confidence` field as it is not guaranteed to be accurate or consistent. The default of 0.0 is a sentinel value indicating `confidence` was not set.
gWords :: Lens' GoogleCloudVideointelligenceV1p2beta1_SpeechRecognitionAlternative [GoogleCloudVideointelligenceV1p2beta1_WordInfo] Source #
A list of word-specific information for each recognized word.
gTranscript :: Lens' GoogleCloudVideointelligenceV1p2beta1_SpeechRecognitionAlternative (Maybe Text) Source #
Transcript text representing the words that the user spoke.
GoogleCloudVideointelligenceV1p2beta1_WordInfo
data GoogleCloudVideointelligenceV1p2beta1_WordInfo Source #
Word-specific information for recognized words. Word information is only included in the response when certain request parameters are set, such as `enable_word_time_offsets`.
See: googleCloudVideointelligenceV1p2beta1_WordInfo
smart constructor.
Instances
googleCloudVideointelligenceV1p2beta1_WordInfo :: GoogleCloudVideointelligenceV1p2beta1_WordInfo Source #
Creates a value of GoogleCloudVideointelligenceV1p2beta1_WordInfo
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gooStartTime :: Lens' GoogleCloudVideointelligenceV1p2beta1_WordInfo (Maybe Scientific) Source #
Time offset relative to the beginning of the audio, and corresponding to the start of the spoken word. This field is only set if `enable_word_time_offsets=true` and only in the top hypothesis. This is an experimental feature and the accuracy of the time offset can vary.
gooConfidence :: Lens' GoogleCloudVideointelligenceV1p2beta1_WordInfo (Maybe Double) Source #
Output only. The confidence estimate between 0.0 and 1.0. A higher number indicates an estimated greater likelihood that the recognized words are correct. This field is set only for the top alternative. This field is not guaranteed to be accurate and users should not rely on it to be always provided. The default of 0.0 is a sentinel value indicating `confidence` was not set.
gooEndTime :: Lens' GoogleCloudVideointelligenceV1p2beta1_WordInfo (Maybe Scientific) Source #
Time offset relative to the beginning of the audio, and corresponding to the end of the spoken word. This field is only set if `enable_word_time_offsets=true` and only in the top hypothesis. This is an experimental feature and the accuracy of the time offset can vary.
gooWord :: Lens' GoogleCloudVideointelligenceV1p2beta1_WordInfo (Maybe Text) Source #
The word corresponding to this set of information.
gooSpeakerTag :: Lens' GoogleCloudVideointelligenceV1p2beta1_WordInfo (Maybe Int32) Source #
Output only. A distinct integer value is assigned for every speaker within the audio. This field specifies which one of those speakers was detected to have spoken this word. Value ranges from 1 up to diarization_speaker_count, and is only set if speaker diarization is enabled.
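As a sketch of consuming these fields (again assuming the gogol-videointelligence package and `Control.Lens` are available), word-level timings can be pulled out of a recognition alternative, keeping only words whose optional fields were actually populated:

```haskell
import Control.Lens ((^.))
import Data.Maybe (mapMaybe)
import Data.Scientific (Scientific)
import Data.Text (Text)
import Network.Google.VideoIntelligence

-- Pair each recognized word with its start offset, skipping entries
-- where either optional field is absent.
wordTimings
  :: GoogleCloudVideointelligenceV1p2beta1_SpeechRecognitionAlternative
  -> [(Text, Scientific)]
wordTimings alt =
  mapMaybe
    (\w -> (,) <$> (w ^. gooWord) <*> (w ^. gooStartTime))
    (alt ^. gWords)
```

On the minimal value from the smart constructor, `gWords` reads as the empty list, so `wordTimings` returns `[]`.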
GoogleCloudVideointelligenceV1p1beta1_LabelFrame
data GoogleCloudVideointelligenceV1p1beta1_LabelFrame Source #
Video frame level annotation results for label detection.
See: googleCloudVideointelligenceV1p1beta1_LabelFrame
smart constructor.
Instances
googleCloudVideointelligenceV1p1beta1_LabelFrame :: GoogleCloudVideointelligenceV1p1beta1_LabelFrame Source #
Creates a value of GoogleCloudVideointelligenceV1p1beta1_LabelFrame
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvlfcTimeOffSet :: Lens' GoogleCloudVideointelligenceV1p1beta1_LabelFrame (Maybe Scientific) Source #
Time-offset, relative to the beginning of the video, corresponding to the video frame for this location.
gcvvlfcConfidence :: Lens' GoogleCloudVideointelligenceV1p1beta1_LabelFrame (Maybe Double) Source #
Confidence that the label is accurate. Range: [0, 1].
GoogleCloudVideointelligenceV1p1beta1_ShotChangeDetectionConfig
data GoogleCloudVideointelligenceV1p1beta1_ShotChangeDetectionConfig Source #
Config for SHOT_CHANGE_DETECTION.
See: googleCloudVideointelligenceV1p1beta1_ShotChangeDetectionConfig
smart constructor.
Instances
googleCloudVideointelligenceV1p1beta1_ShotChangeDetectionConfig :: GoogleCloudVideointelligenceV1p1beta1_ShotChangeDetectionConfig Source #
Creates a value of GoogleCloudVideointelligenceV1p1beta1_ShotChangeDetectionConfig
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvscdcModel :: Lens' GoogleCloudVideointelligenceV1p1beta1_ShotChangeDetectionConfig (Maybe Text) Source #
Model to use for shot change detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest".
GoogleCloudVideointelligenceV1p2beta1_ExplicitContentAnnotation
data GoogleCloudVideointelligenceV1p2beta1_ExplicitContentAnnotation Source #
Explicit content annotation (based on per-frame visual signals only). If no explicit content has been detected in a frame, no annotations are present for that frame.
See: googleCloudVideointelligenceV1p2beta1_ExplicitContentAnnotation
smart constructor.
Instances
googleCloudVideointelligenceV1p2beta1_ExplicitContentAnnotation :: GoogleCloudVideointelligenceV1p2beta1_ExplicitContentAnnotation Source #
Creates a value of GoogleCloudVideointelligenceV1p2beta1_ExplicitContentAnnotation
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gFrames :: Lens' GoogleCloudVideointelligenceV1p2beta1_ExplicitContentAnnotation [GoogleCloudVideointelligenceV1p2beta1_ExplicitContentFrame] Source #
All video frames where explicit content was detected.
GoogleCloudVideointelligenceV1_ExplicitContentFramePornographyLikelihood
data GoogleCloudVideointelligenceV1_ExplicitContentFramePornographyLikelihood Source #
Likelihood of the pornography content.
LikelihoodUnspecified | Unspecified likelihood.
VeryUnlikely | Very unlikely.
Unlikely | Unlikely.
Possible | Possible.
Likely | Likely.
VeryLikely | Very likely.
Instances
GoogleCloudVideointelligenceV1p1beta1_Entity
data GoogleCloudVideointelligenceV1p1beta1_Entity Source #
Detected entity from video analysis.
See: googleCloudVideointelligenceV1p1beta1_Entity
smart constructor.
Instances
googleCloudVideointelligenceV1p1beta1_Entity :: GoogleCloudVideointelligenceV1p1beta1_Entity Source #
Creates a value of GoogleCloudVideointelligenceV1p1beta1_Entity
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gLanguageCode :: Lens' GoogleCloudVideointelligenceV1p1beta1_Entity (Maybe Text) Source #
Language code for `description` in BCP-47 format.
gEntityId :: Lens' GoogleCloudVideointelligenceV1p1beta1_Entity (Maybe Text) Source #
Opaque entity ID. Some IDs may be available in Google Knowledge Graph Search API.
gDescription :: Lens' GoogleCloudVideointelligenceV1p1beta1_Entity (Maybe Text) Source #
Textual description, e.g. `Fixed-gear bicycle`.
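Entity values are normally produced by the service, but building one by hand shows the lenses together. This is an illustrative sketch: the entity ID below is a made-up example of the opaque Knowledge Graph ID format, and the whole block assumes the gogol-videointelligence package is in scope.

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Control.Lens ((&), (?~), (^.))
import Network.Google.VideoIntelligence

-- Hand-built entity; in practice these arrive in annotation results.
dogEntity :: GoogleCloudVideointelligenceV1p1beta1_Entity
dogEntity =
  googleCloudVideointelligenceV1p1beta1_Entity
    & gEntityId     ?~ "/m/0bt9lr"   -- illustrative opaque entity ID
    & gDescription  ?~ "dog"
    & gLanguageCode ?~ "en-US"
```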
GoogleCloudVideointelligenceV1p2beta1_AnnotateVideoResponse
data GoogleCloudVideointelligenceV1p2beta1_AnnotateVideoResponse Source #
Video annotation response. Included in the `response` field of the `Operation` returned by the `GetOperation` call of the `google::longrunning::Operations` service.
See: googleCloudVideointelligenceV1p2beta1_AnnotateVideoResponse
smart constructor.
Instances
googleCloudVideointelligenceV1p2beta1_AnnotateVideoResponse :: GoogleCloudVideointelligenceV1p2beta1_AnnotateVideoResponse Source #
Creates a value of GoogleCloudVideointelligenceV1p2beta1_AnnotateVideoResponse
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gAnnotationResults :: Lens' GoogleCloudVideointelligenceV1p2beta1_AnnotateVideoResponse [GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationResults] Source #
Annotation results for all videos specified in `AnnotateVideoRequest`.
GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoProgress
data GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoProgress Source #
Video annotation progress. Included in the `metadata` field of the `Operation` returned by the `GetOperation` call of the `google::longrunning::Operations` service.
See: googleCloudVideointelligenceV1p1beta1_AnnotateVideoProgress
smart constructor.
Instances
googleCloudVideointelligenceV1p1beta1_AnnotateVideoProgress :: GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoProgress Source #
Creates a value of GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoProgress
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gAnnotationProgress :: Lens' GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoProgress [GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationProgress] Source #
Progress metadata for all videos specified in `AnnotateVideoRequest`.
GoogleCloudVideointelligenceV1_VideoAnnotationResults
data GoogleCloudVideointelligenceV1_VideoAnnotationResults Source #
Annotation results for a single video.
See: googleCloudVideointelligenceV1_VideoAnnotationResults
smart constructor.
Instances
googleCloudVideointelligenceV1_VideoAnnotationResults :: GoogleCloudVideointelligenceV1_VideoAnnotationResults Source #
Creates a value of GoogleCloudVideointelligenceV1_VideoAnnotationResults
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvvarShotAnnotations :: Lens' GoogleCloudVideointelligenceV1_VideoAnnotationResults [GoogleCloudVideointelligenceV1_VideoSegment] Source #
Shot annotations. Each shot is represented as a video segment.
gcvvvarShotLabelAnnotations :: Lens' GoogleCloudVideointelligenceV1_VideoAnnotationResults [GoogleCloudVideointelligenceV1_LabelAnnotation] Source #
Label annotations on shot level. There is exactly one element for each unique label.
gcvvvarInputURI :: Lens' GoogleCloudVideointelligenceV1_VideoAnnotationResults (Maybe Text) Source #
Video file location in Google Cloud Storage.
gcvvvarError :: Lens' GoogleCloudVideointelligenceV1_VideoAnnotationResults (Maybe GoogleRpc_Status) Source #
If set, indicates an error. Note that for a single `AnnotateVideoRequest` some videos may succeed and some may fail.
gcvvvarFrameLabelAnnotations :: Lens' GoogleCloudVideointelligenceV1_VideoAnnotationResults [GoogleCloudVideointelligenceV1_LabelAnnotation] Source #
Label annotations on frame level. There is exactly one element for each unique label.
gcvvvarSpeechTranscriptions :: Lens' GoogleCloudVideointelligenceV1_VideoAnnotationResults [GoogleCloudVideointelligenceV1_SpeechTranscription] Source #
Speech transcription.
gcvvvarSegmentLabelAnnotations :: Lens' GoogleCloudVideointelligenceV1_VideoAnnotationResults [GoogleCloudVideointelligenceV1_LabelAnnotation] Source #
Label annotations on video level or user specified segment level. There is exactly one element for each unique label.
gcvvvarExplicitAnnotation :: Lens' GoogleCloudVideointelligenceV1_VideoAnnotationResults (Maybe GoogleCloudVideointelligenceV1_ExplicitContentAnnotation) Source #
Explicit content annotation.
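A small consumer sketch over the lenses above (same assumption that the gogol-videointelligence package is in scope): collecting the detected entities from the segment-level label annotations of one video's results.

```haskell
import Control.Lens ((^.))
import Data.Maybe (mapMaybe)
import Network.Google.VideoIntelligence

-- Walk the segment-level label annotations and keep each one's
-- detected entity, dropping annotations where it is absent.
segmentEntities
  :: GoogleCloudVideointelligenceV1_VideoAnnotationResults
  -> [GoogleCloudVideointelligenceV1_Entity]
segmentEntities results =
  mapMaybe (^. gcvvlaEntity) (results ^. gcvvvarSegmentLabelAnnotations)
```

On the minimal results value the annotation list reads as empty, so the function returns `[]`.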
GoogleCloudVideointelligenceV1p2beta1_ObjectTrackingAnnotation
data GoogleCloudVideointelligenceV1p2beta1_ObjectTrackingAnnotation Source #
Annotations corresponding to one tracked object.
See: googleCloudVideointelligenceV1p2beta1_ObjectTrackingAnnotation
smart constructor.
Instances
googleCloudVideointelligenceV1p2beta1_ObjectTrackingAnnotation :: GoogleCloudVideointelligenceV1p2beta1_ObjectTrackingAnnotation Source #
Creates a value of GoogleCloudVideointelligenceV1p2beta1_ObjectTrackingAnnotation
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvotaFrames :: Lens' GoogleCloudVideointelligenceV1p2beta1_ObjectTrackingAnnotation [GoogleCloudVideointelligenceV1p2beta1_ObjectTrackingFrame] Source #
Information corresponding to all frames where this object track appears.
gcvvotaConfidence :: Lens' GoogleCloudVideointelligenceV1p2beta1_ObjectTrackingAnnotation (Maybe Double) Source #
Object category's labeling confidence of this track.
gcvvotaSegment :: Lens' GoogleCloudVideointelligenceV1p2beta1_ObjectTrackingAnnotation (Maybe GoogleCloudVideointelligenceV1p2beta1_VideoSegment) Source #
Each object track corresponds to one video segment where it appears.
gcvvotaEntity :: Lens' GoogleCloudVideointelligenceV1p2beta1_ObjectTrackingAnnotation (Maybe GoogleCloudVideointelligenceV1p2beta1_Entity) Source #
Entity to specify the object category that this track is labeled as.
GoogleCloudVideointelligenceV1p1beta1_VideoContext
data GoogleCloudVideointelligenceV1p1beta1_VideoContext Source #
Video context and/or feature-specific parameters.
See: googleCloudVideointelligenceV1p1beta1_VideoContext
smart constructor.
Instances
googleCloudVideointelligenceV1p1beta1_VideoContext :: GoogleCloudVideointelligenceV1p1beta1_VideoContext Source #
Creates a value of GoogleCloudVideointelligenceV1p1beta1_VideoContext
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvvcSpeechTranscriptionConfig :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoContext (Maybe GoogleCloudVideointelligenceV1p1beta1_SpeechTranscriptionConfig) Source #
Config for SPEECH_TRANSCRIPTION.
gcvvvcExplicitContentDetectionConfig :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoContext (Maybe GoogleCloudVideointelligenceV1p1beta1_ExplicitContentDetectionConfig) Source #
Config for EXPLICIT_CONTENT_DETECTION.
gcvvvcLabelDetectionConfig :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoContext (Maybe GoogleCloudVideointelligenceV1p1beta1_LabelDetectionConfig) Source #
Config for LABEL_DETECTION.
gcvvvcSegments :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoContext [GoogleCloudVideointelligenceV1p1beta1_VideoSegment] Source #
Video segments to annotate. The segments may overlap and are not required to be contiguous or span the whole video. If unspecified, each video is treated as a single segment.
gcvvvcShotChangeDetectionConfig :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoContext (Maybe GoogleCloudVideointelligenceV1p1beta1_ShotChangeDetectionConfig) Source #
Config for SHOT_CHANGE_DETECTION.
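The per-feature configs nest inside the video context, which can be sketched as follows (assuming the gogol-videointelligence package; `"builtin/latest"` is one of the model values documented for the shot-change config above):

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Control.Lens ((&), (?~), (^.))
import Network.Google.VideoIntelligence

-- A context that opts shot-change detection into the "builtin/latest"
-- model; all other feature configs stay at their Nothing defaults.
latestShotCtx :: GoogleCloudVideointelligenceV1p1beta1_VideoContext
latestShotCtx =
  googleCloudVideointelligenceV1p1beta1_VideoContext
    & gcvvvcShotChangeDetectionConfig ?~
        (googleCloudVideointelligenceV1p1beta1_ShotChangeDetectionConfig
           & gcvvscdcModel ?~ "builtin/latest")
```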
GoogleCloudVideointelligenceV1p2beta1_AnnotateVideoProgress
data GoogleCloudVideointelligenceV1p2beta1_AnnotateVideoProgress Source #
Video annotation progress. Included in the `metadata` field of the `Operation` returned by the `GetOperation` call of the `google::longrunning::Operations` service.
See: googleCloudVideointelligenceV1p2beta1_AnnotateVideoProgress
smart constructor.
Instances
googleCloudVideointelligenceV1p2beta1_AnnotateVideoProgress :: GoogleCloudVideointelligenceV1p2beta1_AnnotateVideoProgress Source #
Creates a value of GoogleCloudVideointelligenceV1p2beta1_AnnotateVideoProgress
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvavpsAnnotationProgress :: Lens' GoogleCloudVideointelligenceV1p2beta1_AnnotateVideoProgress [GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationProgress] Source #
Progress metadata for all videos specified in `AnnotateVideoRequest`.
GoogleLongrunning_OperationMetadata
data GoogleLongrunning_OperationMetadata Source #
Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
See: googleLongrunning_OperationMetadata
smart constructor.
Instances
googleLongrunning_OperationMetadata :: HashMap Text JSONValue -> GoogleLongrunning_OperationMetadata Source #
Creates a value of GoogleLongrunning_OperationMetadata
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
glomAddtional :: Lens' GoogleLongrunning_OperationMetadata (HashMap Text JSONValue) Source #
Properties of the object. Contains field `@type` with type URL.
GoogleCloudVideointelligenceV1p1beta1_LabelSegment
data GoogleCloudVideointelligenceV1p1beta1_LabelSegment Source #
Video segment level annotation results for label detection.
See: googleCloudVideointelligenceV1p1beta1_LabelSegment
smart constructor.
Instances
googleCloudVideointelligenceV1p1beta1_LabelSegment :: GoogleCloudVideointelligenceV1p1beta1_LabelSegment Source #
Creates a value of GoogleCloudVideointelligenceV1p1beta1_LabelSegment
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvlsConfidence :: Lens' GoogleCloudVideointelligenceV1p1beta1_LabelSegment (Maybe Double) Source #
Confidence that the label is accurate. Range: [0, 1].
gcvvlsSegment :: Lens' GoogleCloudVideointelligenceV1p1beta1_LabelSegment (Maybe GoogleCloudVideointelligenceV1p1beta1_VideoSegment) Source #
Video segment where a label was detected.
GoogleCloudVideointelligenceV1p2beta1_LabelFrame
data GoogleCloudVideointelligenceV1p2beta1_LabelFrame Source #
Video frame level annotation results for label detection.
See: googleCloudVideointelligenceV1p2beta1_LabelFrame
smart constructor.
Instances
googleCloudVideointelligenceV1p2beta1_LabelFrame :: GoogleCloudVideointelligenceV1p2beta1_LabelFrame Source #
Creates a value of GoogleCloudVideointelligenceV1p2beta1_LabelFrame
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
ggTimeOffSet :: Lens' GoogleCloudVideointelligenceV1p2beta1_LabelFrame (Maybe Scientific) Source #
Time-offset, relative to the beginning of the video, corresponding to the video frame for this location.
ggConfidence :: Lens' GoogleCloudVideointelligenceV1p2beta1_LabelFrame (Maybe Double) Source #
Confidence that the label is accurate. Range: [0, 1].
GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationProgress
data GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationProgress Source #
Annotation progress for a single video.
See: googleCloudVideointelligenceV1p2beta1_VideoAnnotationProgress
smart constructor.
Instances
googleCloudVideointelligenceV1p2beta1_VideoAnnotationProgress :: GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationProgress Source #
Creates a value of GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationProgress
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gStartTime :: Lens' GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationProgress (Maybe UTCTime) Source #
Time when the request was received.
gInputURI :: Lens' GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationProgress (Maybe Text) Source #
Video file location in Google Cloud Storage.
gProgressPercent :: Lens' GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationProgress (Maybe Int32) Source #
Approximate percentage processed thus far. Guaranteed to be 100 when fully processed.
gUpdateTime :: Lens' GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationProgress (Maybe UTCTime) Source #
Time of the most recent update.
GoogleCloudVideointelligenceV1p2beta1_Entity
data GoogleCloudVideointelligenceV1p2beta1_Entity Source #
Detected entity from video analysis.
See: googleCloudVideointelligenceV1p2beta1_Entity
smart constructor.
Instances
googleCloudVideointelligenceV1p2beta1_Entity :: GoogleCloudVideointelligenceV1p2beta1_Entity Source #
Creates a value of GoogleCloudVideointelligenceV1p2beta1_Entity
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gooLanguageCode :: Lens' GoogleCloudVideointelligenceV1p2beta1_Entity (Maybe Text) Source #
Language code for `description` in BCP-47 format.
gooEntityId :: Lens' GoogleCloudVideointelligenceV1p2beta1_Entity (Maybe Text) Source #
Opaque entity ID. Some IDs may be available in Google Knowledge Graph Search API.
gooDescription :: Lens' GoogleCloudVideointelligenceV1p2beta1_Entity (Maybe Text) Source #
Textual description, e.g. `Fixed-gear bicycle`.
GoogleCloudVideointelligenceV1p1beta1_WordInfo
data GoogleCloudVideointelligenceV1p1beta1_WordInfo Source #
Word-specific information for recognized words. Word information is only included in the response when certain request parameters are set, such as `enable_word_time_offsets`.
See: googleCloudVideointelligenceV1p1beta1_WordInfo
smart constructor.
Instances
googleCloudVideointelligenceV1p1beta1_WordInfo :: GoogleCloudVideointelligenceV1p1beta1_WordInfo Source #
Creates a value of GoogleCloudVideointelligenceV1p1beta1_WordInfo
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvwicStartTime :: Lens' GoogleCloudVideointelligenceV1p1beta1_WordInfo (Maybe Scientific) Source #
Time offset relative to the beginning of the audio, and corresponding to the start of the spoken word. This field is only set if `enable_word_time_offsets=true` and only in the top hypothesis. This is an experimental feature and the accuracy of the time offset can vary.
gcvvwicConfidence :: Lens' GoogleCloudVideointelligenceV1p1beta1_WordInfo (Maybe Double) Source #
Output only. The confidence estimate between 0.0 and 1.0. A higher number indicates an estimated greater likelihood that the recognized words are correct. This field is set only for the top alternative. This field is not guaranteed to be accurate and users should not rely on it to be always provided. The default of 0.0 is a sentinel value indicating `confidence` was not set.
gcvvwicEndTime :: Lens' GoogleCloudVideointelligenceV1p1beta1_WordInfo (Maybe Scientific) Source #
Time offset relative to the beginning of the audio, and corresponding to the end of the spoken word. This field is only set if `enable_word_time_offsets=true` and only in the top hypothesis. This is an experimental feature and the accuracy of the time offset can vary.
gcvvwicWord :: Lens' GoogleCloudVideointelligenceV1p1beta1_WordInfo (Maybe Text) Source #
The word corresponding to this set of information.
gcvvwicSpeakerTag :: Lens' GoogleCloudVideointelligenceV1p1beta1_WordInfo (Maybe Int32) Source #
Output only. A distinct integer value is assigned for every speaker within the audio. This field specifies which one of those speakers was detected to have spoken this word. Value ranges from 1 up to diarization_speaker_count, and is only set if speaker diarization is enabled.
GoogleLongrunning_Operation
data GoogleLongrunning_Operation Source #
This resource represents a long-running operation that is the result of a network API call.
See: googleLongrunning_Operation
smart constructor.
Instances
googleLongrunning_Operation :: GoogleLongrunning_Operation Source #
Creates a value of GoogleLongrunning_Operation
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gloDone :: Lens' GoogleLongrunning_Operation (Maybe Bool) Source #
If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
gloError :: Lens' GoogleLongrunning_Operation (Maybe GoogleRpc_Status) Source #
The error result of the operation in case of failure or cancellation.
gloResponse :: Lens' GoogleLongrunning_Operation (Maybe GoogleLongrunning_OperationResponse) Source #
The normal response of the operation in case of success. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
gloName :: Lens' GoogleLongrunning_Operation (Maybe Text) Source #
The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should have the format of `operations/some/unique/name`.
gloMetadata :: Lens' GoogleLongrunning_Operation (Maybe GoogleLongrunning_OperationMetadata) Source #
Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
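The `done`/`error`/`response` contract described above suggests a small local classifier when polling operations. This helper is not part of the library, just a sketch of how the fields combine (assuming the gogol-videointelligence package re-exports these names):

```haskell
import Control.Lens ((^.))
import Data.Maybe (isJust)
import Network.Google.VideoIntelligence

-- Local interpretation of a polled Operation, per the field contract:
-- `done` absent or False means in progress; `error` set means failure;
-- otherwise the operation finished and `response` should be available.
data OperationState = Running | Failed | Finished
  deriving (Eq, Show)

operationState :: GoogleLongrunning_Operation -> OperationState
operationState op
  | op ^. gloDone /= Just True = Running
  | isJust (op ^. gloError)    = Failed
  | otherwise                  = Finished
```

A freshly constructed `googleLongrunning_Operation` has `gloDone` unset and therefore classifies as `Running`.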
GoogleCloudVideointelligenceV1p1beta1_SpeechRecognitionAlternative
data GoogleCloudVideointelligenceV1p1beta1_SpeechRecognitionAlternative Source #
Alternative hypotheses (a.k.a. n-best list).
See: googleCloudVideointelligenceV1p1beta1_SpeechRecognitionAlternative
smart constructor.
Instances
googleCloudVideointelligenceV1p1beta1_SpeechRecognitionAlternative :: GoogleCloudVideointelligenceV1p1beta1_SpeechRecognitionAlternative Source #
Creates a value of GoogleCloudVideointelligenceV1p1beta1_SpeechRecognitionAlternative
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvsracConfidence :: Lens' GoogleCloudVideointelligenceV1p1beta1_SpeechRecognitionAlternative (Maybe Double) Source #
The confidence estimate between 0.0 and 1.0. A higher number indicates an estimated greater likelihood that the recognized words are correct. This field is typically provided only for the top hypothesis, and only for `is_final=true` results. Clients should not rely on the `confidence` field as it is not guaranteed to be accurate or consistent. The default of 0.0 is a sentinel value indicating `confidence` was not set.
gcvvsracWords :: Lens' GoogleCloudVideointelligenceV1p1beta1_SpeechRecognitionAlternative [GoogleCloudVideointelligenceV1p1beta1_WordInfo] Source #
A list of word-specific information for each recognized word.
gcvvsracTranscript :: Lens' GoogleCloudVideointelligenceV1p1beta1_SpeechRecognitionAlternative (Maybe Text) Source #
Transcript text representing the words that the user spoke.
GoogleCloudVideointelligenceV1_ExplicitContentFrame
data GoogleCloudVideointelligenceV1_ExplicitContentFrame Source #
Video frame level annotation results for explicit content.
See: googleCloudVideointelligenceV1_ExplicitContentFrame
smart constructor.
Instances
googleCloudVideointelligenceV1_ExplicitContentFrame :: GoogleCloudVideointelligenceV1_ExplicitContentFrame Source #
Creates a value of GoogleCloudVideointelligenceV1_ExplicitContentFrame
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gTimeOffSet :: Lens' GoogleCloudVideointelligenceV1_ExplicitContentFrame (Maybe Scientific) Source #
Time-offset, relative to the beginning of the video, corresponding to the video frame for this location.
gPornographyLikelihood :: Lens' GoogleCloudVideointelligenceV1_ExplicitContentFrame (Maybe GoogleCloudVideointelligenceV1_ExplicitContentFramePornographyLikelihood) Source #
Likelihood of the pornography content.
GoogleCloudVideointelligenceV1beta2_VideoSegment
data GoogleCloudVideointelligenceV1beta2_VideoSegment Source #
Video segment.
See: googleCloudVideointelligenceV1beta2_VideoSegment
smart constructor.
Instances
googleCloudVideointelligenceV1beta2_VideoSegment :: GoogleCloudVideointelligenceV1beta2_VideoSegment Source #
Creates a value of GoogleCloudVideointelligenceV1beta2_VideoSegment
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gStartTimeOffSet :: Lens' GoogleCloudVideointelligenceV1beta2_VideoSegment (Maybe Scientific) Source #
Time-offset, relative to the beginning of the video, corresponding to the start of the segment (inclusive).
gEndTimeOffSet :: Lens' GoogleCloudVideointelligenceV1beta2_VideoSegment (Maybe Scientific) Source #
Time-offset, relative to the beginning of the video, corresponding to the end of the segment (inclusive).
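A segment can be built from the two offset lenses; numeric literals default into `Scientific` via its `Num` instance. The offsets below are illustrative, and the block assumes the gogol-videointelligence package is in scope.

```haskell
import Control.Lens ((&), (?~), (^.))
import Data.Scientific (Scientific)
import Network.Google.VideoIntelligence

-- A segment with illustrative start and end offsets; both ends of
-- the segment are inclusive, per the field documentation.
clipSegment :: GoogleCloudVideointelligenceV1beta2_VideoSegment
clipSegment =
  googleCloudVideointelligenceV1beta2_VideoSegment
    & gStartTimeOffSet ?~ (10 :: Scientific)
    & gEndTimeOffSet   ?~ 30
```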
GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationResults
data GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationResults Source #
Annotation results for a single video.
See: googleCloudVideointelligenceV1p2beta1_VideoAnnotationResults
smart constructor.
Instances
googleCloudVideointelligenceV1p2beta1_VideoAnnotationResults :: GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationResults Source #
Creates a value of GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationResults
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvvarsShotAnnotations :: Lens' GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationResults [GoogleCloudVideointelligenceV1p2beta1_VideoSegment] Source #
Shot annotations. Each shot is represented as a video segment.
gcvvvarsShotLabelAnnotations :: Lens' GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationResults [GoogleCloudVideointelligenceV1p2beta1_LabelAnnotation] Source #
Label annotations on shot level. There is exactly one element for each unique label.
gcvvvarsInputURI :: Lens' GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationResults (Maybe Text) Source #
Video file location in Google Cloud Storage.
gcvvvarsError :: Lens' GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationResults (Maybe GoogleRpc_Status) Source #
If set, indicates an error. Note that for a single `AnnotateVideoRequest` some videos may succeed and some may fail.
gcvvvarsObjectAnnotations :: Lens' GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationResults [GoogleCloudVideointelligenceV1p2beta1_ObjectTrackingAnnotation] Source #
Annotations for list of objects detected and tracked in video.
gcvvvarsFrameLabelAnnotations :: Lens' GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationResults [GoogleCloudVideointelligenceV1p2beta1_LabelAnnotation] Source #
Label annotations on frame level. There is exactly one element for each unique label.
gcvvvarsSpeechTranscriptions :: Lens' GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationResults [GoogleCloudVideointelligenceV1p2beta1_SpeechTranscription] Source #
Speech transcription.
gcvvvarsTextAnnotations :: Lens' GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationResults [GoogleCloudVideointelligenceV1p2beta1_TextAnnotation] Source #
OCR text detection and tracking. Annotations for the list of detected text snippets; each has a list of frame information associated with it.
gcvvvarsSegmentLabelAnnotations :: Lens' GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationResults [GoogleCloudVideointelligenceV1p2beta1_LabelAnnotation] Source #
Label annotations on video level or user specified segment level. There is exactly one element for each unique label.
gcvvvarsExplicitAnnotation :: Lens' GoogleCloudVideointelligenceV1p2beta1_VideoAnnotationResults (Maybe GoogleCloudVideointelligenceV1p2beta1_ExplicitContentAnnotation) Source #
Explicit content annotation.
GoogleCloudVideointelligenceV1beta2_LabelSegment
data GoogleCloudVideointelligenceV1beta2_LabelSegment Source #
Video segment level annotation results for label detection.
See: googleCloudVideointelligenceV1beta2_LabelSegment
smart constructor.
Instances
googleCloudVideointelligenceV1beta2_LabelSegment :: GoogleCloudVideointelligenceV1beta2_LabelSegment Source #
Creates a value of GoogleCloudVideointelligenceV1beta2_LabelSegment
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvlscConfidence :: Lens' GoogleCloudVideointelligenceV1beta2_LabelSegment (Maybe Double) Source #
Confidence that the label is accurate. Range: [0, 1].
gcvvlscSegment :: Lens' GoogleCloudVideointelligenceV1beta2_LabelSegment (Maybe GoogleCloudVideointelligenceV1beta2_VideoSegment) Source #
Video segment where a label was detected.
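The smart-constructor-plus-lens pattern documented above is the same for every type in this module. As a sketch (assuming the usual gogol re-exports from Network.Google.VideoIntelligence and the `&` and `?~` operators from the lens package), a label segment can be built like this:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((&), (?~))
import Network.Google.VideoIntelligence

-- A segment-level label result: 90% confidence over one video segment.
-- Maybe-typed fields are set with (?~), which wraps the value in Just.
labelSegment :: GoogleCloudVideointelligenceV1beta2_LabelSegment
labelSegment =
  googleCloudVideointelligenceV1beta2_LabelSegment
    & gcvvlscConfidence ?~ 0.9
    & gcvvlscSegment ?~ googleCloudVideointelligenceV1beta2_VideoSegment
```

The smart constructor supplies `Nothing` for every optional field, so only the fields you care about need to be set.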
GoogleCloudVideointelligenceV1p2beta1_NormalizedBoundingPoly
data GoogleCloudVideointelligenceV1p2beta1_NormalizedBoundingPoly Source #
Normalized bounding polygon for text (that might not be aligned with axis). Contains a list of the corner points in clockwise order starting from the top-left corner. For example, for a rectangular bounding box: when the text is horizontal it might look like: 0----1 | | 3----2 When it's rotated 180 degrees clockwise around the top-left corner it becomes: 2----3 | | 1----0 and the vertex order will still be (0, 1, 2, 3). Note that values can be less than 0 or greater than 1 due to trigonometric calculations for the location of the box.
See: googleCloudVideointelligenceV1p2beta1_NormalizedBoundingPoly
smart constructor.
Instances
googleCloudVideointelligenceV1p2beta1_NormalizedBoundingPoly :: GoogleCloudVideointelligenceV1p2beta1_NormalizedBoundingPoly Source #
Creates a value of GoogleCloudVideointelligenceV1p2beta1_NormalizedBoundingPoly
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvnbpVertices :: Lens' GoogleCloudVideointelligenceV1p2beta1_NormalizedBoundingPoly [GoogleCloudVideointelligenceV1p2beta1_NormalizedVertex] Source #
Normalized vertices of the bounding polygon.
GoogleCloudVideointelligenceV1beta2_WordInfo
data GoogleCloudVideointelligenceV1beta2_WordInfo Source #
Word-specific information for recognized words. Word information is only included in the response when certain request parameters are set, such as `enable_word_time_offsets`.
See: googleCloudVideointelligenceV1beta2_WordInfo
smart constructor.
Instances
googleCloudVideointelligenceV1beta2_WordInfo :: GoogleCloudVideointelligenceV1beta2_WordInfo Source #
Creates a value of GoogleCloudVideointelligenceV1beta2_WordInfo
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
goooStartTime :: Lens' GoogleCloudVideointelligenceV1beta2_WordInfo (Maybe Scientific) Source #
Time offset relative to the beginning of the audio, and corresponding to the start of the spoken word. This field is only set if `enable_word_time_offsets=true` and only in the top hypothesis. This is an experimental feature and the accuracy of the time offset can vary.
goooConfidence :: Lens' GoogleCloudVideointelligenceV1beta2_WordInfo (Maybe Double) Source #
Output only. The confidence estimate between 0.0 and 1.0. A higher number indicates an estimated greater likelihood that the recognized words are correct. This field is set only for the top alternative. This field is not guaranteed to be accurate and users should not rely on it to be always provided. The default of 0.0 is a sentinel value indicating `confidence` was not set.
goooEndTime :: Lens' GoogleCloudVideointelligenceV1beta2_WordInfo (Maybe Scientific) Source #
Time offset relative to the beginning of the audio, and corresponding to the end of the spoken word. This field is only set if `enable_word_time_offsets=true` and only in the top hypothesis. This is an experimental feature and the accuracy of the time offset can vary.
goooWord :: Lens' GoogleCloudVideointelligenceV1beta2_WordInfo (Maybe Text) Source #
The word corresponding to this set of information.
goooSpeakerTag :: Lens' GoogleCloudVideointelligenceV1beta2_WordInfo (Maybe Int32) Source #
Output only. A distinct integer value is assigned for every speaker within the audio. This field specifies which one of those speakers was detected to have spoken this word. Value ranges from 1 up to diarization_speaker_count, and is only set if speaker diarization is enabled.
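Since every field above is Maybe-typed, reading a `WordInfo` is a matter of viewing each lens with `^.` and handling the `Nothing` cases. A minimal sketch (assuming the gogol re-exports from Network.Google.VideoIntelligence):

```haskell
import Control.Lens ((^.))
import Data.Maybe (fromMaybe)
import qualified Data.Text as T
import Network.Google.VideoIntelligence

-- Render one recognized word with its (experimental) time offsets.
-- Fields that were not set in the response simply produce no output.
describeWord :: GoogleCloudVideointelligenceV1beta2_WordInfo -> String
describeWord w =
  T.unpack (fromMaybe "<?>" (w ^. goooWord))
    ++ maybe "" (\t -> " start=" ++ show t) (w ^. goooStartTime)
    ++ maybe "" (\t -> " end=" ++ show t)   (w ^. goooEndTime)
```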
GoogleCloudVideointelligenceV1_ExplicitContentAnnotation
data GoogleCloudVideointelligenceV1_ExplicitContentAnnotation Source #
Explicit content annotation (based on per-frame visual signals only). If no explicit content has been detected in a frame, no annotations are present for that frame.
See: googleCloudVideointelligenceV1_ExplicitContentAnnotation
smart constructor.
Instances
googleCloudVideointelligenceV1_ExplicitContentAnnotation :: GoogleCloudVideointelligenceV1_ExplicitContentAnnotation Source #
Creates a value of GoogleCloudVideointelligenceV1_ExplicitContentAnnotation
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gooFrames :: Lens' GoogleCloudVideointelligenceV1_ExplicitContentAnnotation [GoogleCloudVideointelligenceV1_ExplicitContentFrame] Source #
All video frames where explicit content was detected.
GoogleCloudVideointelligenceV1_AnnotateVideoResponse
data GoogleCloudVideointelligenceV1_AnnotateVideoResponse Source #
Video annotation response. Included in the `response` field of the `Operation` returned by the `GetOperation` call of the `google::longrunning::Operations` service.
See: googleCloudVideointelligenceV1_AnnotateVideoResponse
smart constructor.
Instances
googleCloudVideointelligenceV1_AnnotateVideoResponse :: GoogleCloudVideointelligenceV1_AnnotateVideoResponse Source #
Creates a value of GoogleCloudVideointelligenceV1_AnnotateVideoResponse
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gooAnnotationResults :: Lens' GoogleCloudVideointelligenceV1_AnnotateVideoResponse [GoogleCloudVideointelligenceV1_VideoAnnotationResults] Source #
Annotation results for all videos specified in `AnnotateVideoRequest`.
GoogleCloudVideointelligenceV1p2beta1_NormalizedVertex
data GoogleCloudVideointelligenceV1p2beta1_NormalizedVertex Source #
A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
See: googleCloudVideointelligenceV1p2beta1_NormalizedVertex
smart constructor.
Instances
googleCloudVideointelligenceV1p2beta1_NormalizedVertex :: GoogleCloudVideointelligenceV1p2beta1_NormalizedVertex Source #
Creates a value of GoogleCloudVideointelligenceV1p2beta1_NormalizedVertex
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvnvX :: Lens' GoogleCloudVideointelligenceV1p2beta1_NormalizedVertex (Maybe Double) Source #
X coordinate.
gcvvnvY :: Lens' GoogleCloudVideointelligenceV1p2beta1_NormalizedVertex (Maybe Double) Source #
Y coordinate.
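A bounding polygon is just a list of these vertices, so the two types compose directly. A sketch (assuming the gogol re-exports from Network.Google.VideoIntelligence and the lens operators `&`, `.~`, `?~`):

```haskell
import Control.Lens ((&), (.~), (?~))
import Network.Google.VideoIntelligence

-- One normalized vertex; coordinates are in [0, 1] relative to the image.
vertex :: Double -> Double -> GoogleCloudVideointelligenceV1p2beta1_NormalizedVertex
vertex x y =
  googleCloudVideointelligenceV1p2beta1_NormalizedVertex
    & gcvvnvX ?~ x
    & gcvvnvY ?~ y

-- An axis-aligned box in the documented clockwise-from-top-left order.
box :: GoogleCloudVideointelligenceV1p2beta1_NormalizedBoundingPoly
box =
  googleCloudVideointelligenceV1p2beta1_NormalizedBoundingPoly
    & gcvvnbpVertices
        .~ [vertex 0.1 0.2, vertex 0.8 0.2, vertex 0.8 0.7, vertex 0.1 0.7]
```

Note that `gcvvnbpVertices` targets a plain list rather than a `Maybe`, so it is set with `.~` instead of `?~`.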
GoogleCloudVideointelligenceV1p2beta1_LabelAnnotation
data GoogleCloudVideointelligenceV1p2beta1_LabelAnnotation Source #
Label annotation.
See: googleCloudVideointelligenceV1p2beta1_LabelAnnotation
smart constructor.
Instances
googleCloudVideointelligenceV1p2beta1_LabelAnnotation :: GoogleCloudVideointelligenceV1p2beta1_LabelAnnotation Source #
Creates a value of GoogleCloudVideointelligenceV1p2beta1_LabelAnnotation
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvlacCategoryEntities :: Lens' GoogleCloudVideointelligenceV1p2beta1_LabelAnnotation [GoogleCloudVideointelligenceV1p2beta1_Entity] Source #
Common categories for the detected entity. For example, when the label is `Terrier`, the category is likely `dog`. In some cases there might be more than one category; e.g. `Terrier` could also be a `pet`.
gcvvlacFrames :: Lens' GoogleCloudVideointelligenceV1p2beta1_LabelAnnotation [GoogleCloudVideointelligenceV1p2beta1_LabelFrame] Source #
All video frames where a label was detected.
gcvvlacSegments :: Lens' GoogleCloudVideointelligenceV1p2beta1_LabelAnnotation [GoogleCloudVideointelligenceV1p2beta1_LabelSegment] Source #
All video segments where a label was detected.
gcvvlacEntity :: Lens' GoogleCloudVideointelligenceV1p2beta1_LabelAnnotation (Maybe GoogleCloudVideointelligenceV1p2beta1_Entity) Source #
Detected entity.
GoogleCloudVideointelligenceV1beta2_SpeechRecognitionAlternative
data GoogleCloudVideointelligenceV1beta2_SpeechRecognitionAlternative Source #
Alternative hypotheses (a.k.a. n-best list).
See: googleCloudVideointelligenceV1beta2_SpeechRecognitionAlternative
smart constructor.
Instances
googleCloudVideointelligenceV1beta2_SpeechRecognitionAlternative :: GoogleCloudVideointelligenceV1beta2_SpeechRecognitionAlternative Source #
Creates a value of GoogleCloudVideointelligenceV1beta2_SpeechRecognitionAlternative
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvsra1Confidence :: Lens' GoogleCloudVideointelligenceV1beta2_SpeechRecognitionAlternative (Maybe Double) Source #
The confidence estimate between 0.0 and 1.0. A higher number indicates an estimated greater likelihood that the recognized words are correct. This field is typically provided only for the top hypothesis, and only for `is_final=true` results. Clients should not rely on the `confidence` field as it is not guaranteed to be accurate or consistent. The default of 0.0 is a sentinel value indicating `confidence` was not set.
gcvvsra1Words :: Lens' GoogleCloudVideointelligenceV1beta2_SpeechRecognitionAlternative [GoogleCloudVideointelligenceV1beta2_WordInfo] Source #
A list of word-specific information for each recognized word.
gcvvsra1Transcript :: Lens' GoogleCloudVideointelligenceV1beta2_SpeechRecognitionAlternative (Maybe Text) Source #
Transcript text representing the words that the user spoke.
GoogleCloudVideointelligenceV1p2beta1_ExplicitContentFramePornographyLikelihood
data GoogleCloudVideointelligenceV1p2beta1_ExplicitContentFramePornographyLikelihood Source #
Likelihood of the pornography content.
Constructors
GCVVECFPLLikelihoodUnspecified: Unspecified likelihood.
GCVVECFPLVeryUnlikely: Very unlikely.
GCVVECFPLUnlikely: Unlikely.
GCVVECFPLPossible: Possible.
GCVVECFPLLikely: Likely.
GCVVECFPLVeryLikely: Very likely.
Instances
GoogleCloudVideointelligenceV1p1beta1_VideoSegment
data GoogleCloudVideointelligenceV1p1beta1_VideoSegment Source #
Video segment.
See: googleCloudVideointelligenceV1p1beta1_VideoSegment
smart constructor.
Instances
googleCloudVideointelligenceV1p1beta1_VideoSegment :: GoogleCloudVideointelligenceV1p1beta1_VideoSegment Source #
Creates a value of GoogleCloudVideointelligenceV1p1beta1_VideoSegment
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gooStartTimeOffSet :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoSegment (Maybe Scientific) Source #
Time-offset, relative to the beginning of the video, corresponding to the start of the segment (inclusive).
gooEndTimeOffSet :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoSegment (Maybe Scientific) Source #
Time-offset, relative to the beginning of the video, corresponding to the end of the segment (inclusive).
GoogleCloudVideointelligenceV1p1beta1_ExplicitContentDetectionConfig
data GoogleCloudVideointelligenceV1p1beta1_ExplicitContentDetectionConfig Source #
Config for EXPLICIT_CONTENT_DETECTION.
See: googleCloudVideointelligenceV1p1beta1_ExplicitContentDetectionConfig
smart constructor.
Instances
googleCloudVideointelligenceV1p1beta1_ExplicitContentDetectionConfig :: GoogleCloudVideointelligenceV1p1beta1_ExplicitContentDetectionConfig Source #
Creates a value of GoogleCloudVideointelligenceV1p1beta1_ExplicitContentDetectionConfig
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvecdcModel :: Lens' GoogleCloudVideointelligenceV1p1beta1_ExplicitContentDetectionConfig (Maybe Text) Source #
Model to use for explicit content detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest".
GoogleCloudVideointelligenceV1p1beta1_LabelAnnotation
data GoogleCloudVideointelligenceV1p1beta1_LabelAnnotation Source #
Label annotation.
See: googleCloudVideointelligenceV1p1beta1_LabelAnnotation
smart constructor.
Instances
googleCloudVideointelligenceV1p1beta1_LabelAnnotation :: GoogleCloudVideointelligenceV1p1beta1_LabelAnnotation Source #
Creates a value of GoogleCloudVideointelligenceV1p1beta1_LabelAnnotation
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
ggCategoryEntities :: Lens' GoogleCloudVideointelligenceV1p1beta1_LabelAnnotation [GoogleCloudVideointelligenceV1p1beta1_Entity] Source #
Common categories for the detected entity. For example, when the label is `Terrier`, the category is likely `dog`. In some cases there might be more than one category; e.g. `Terrier` could also be a `pet`.
ggFrames :: Lens' GoogleCloudVideointelligenceV1p1beta1_LabelAnnotation [GoogleCloudVideointelligenceV1p1beta1_LabelFrame] Source #
All video frames where a label was detected.
ggSegments :: Lens' GoogleCloudVideointelligenceV1p1beta1_LabelAnnotation [GoogleCloudVideointelligenceV1p1beta1_LabelSegment] Source #
All video segments where a label was detected.
ggEntity :: Lens' GoogleCloudVideointelligenceV1p1beta1_LabelAnnotation (Maybe GoogleCloudVideointelligenceV1p1beta1_Entity) Source #
Detected entity.
GoogleCloudVideointelligenceV1_LabelFrame
data GoogleCloudVideointelligenceV1_LabelFrame Source #
Video frame level annotation results for label detection.
See: googleCloudVideointelligenceV1_LabelFrame
smart constructor.
Instances
googleCloudVideointelligenceV1_LabelFrame :: GoogleCloudVideointelligenceV1_LabelFrame Source #
Creates a value of GoogleCloudVideointelligenceV1_LabelFrame
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvlf1TimeOffSet :: Lens' GoogleCloudVideointelligenceV1_LabelFrame (Maybe Scientific) Source #
Time-offset, relative to the beginning of the video, corresponding to the video frame for this location.
gcvvlf1Confidence :: Lens' GoogleCloudVideointelligenceV1_LabelFrame (Maybe Double) Source #
Confidence that the label is accurate. Range: [0, 1].
GoogleCloudVideointelligenceV1p1beta1_LabelDetectionConfigLabelDetectionMode
data GoogleCloudVideointelligenceV1p1beta1_LabelDetectionConfigLabelDetectionMode Source #
What labels should be detected with LABEL_DETECTION, in addition to video-level labels or segment-level labels. If unspecified, defaults to `SHOT_MODE`.
Constructors
LabelDetectionModeUnspecified: Unspecified.
ShotMode: Detect shot-level labels.
FrameMode: Detect frame-level labels.
ShotAndFrameMode: Detect both shot-level and frame-level labels.
Instances
GoogleCloudVideointelligenceV1p1beta1_ExplicitContentFramePornographyLikelihood
data GoogleCloudVideointelligenceV1p1beta1_ExplicitContentFramePornographyLikelihood Source #
Likelihood of the pornography content.
Constructors
GLikelihoodUnspecified: Unspecified likelihood.
GVeryUnlikely: Very unlikely.
GUnlikely: Unlikely.
GPossible: Possible.
GLikely: Likely.
GVeryLikely: Very likely.
Instances
GoogleCloudVideointelligenceV1p2beta1_ExplicitContentFrame
data GoogleCloudVideointelligenceV1p2beta1_ExplicitContentFrame Source #
Video frame level annotation results for explicit content.
See: googleCloudVideointelligenceV1p2beta1_ExplicitContentFrame
smart constructor.
Instances
googleCloudVideointelligenceV1p2beta1_ExplicitContentFrame :: GoogleCloudVideointelligenceV1p2beta1_ExplicitContentFrame Source #
Creates a value of GoogleCloudVideointelligenceV1p2beta1_ExplicitContentFrame
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gooTimeOffSet :: Lens' GoogleCloudVideointelligenceV1p2beta1_ExplicitContentFrame (Maybe Scientific) Source #
Time-offset, relative to the beginning of the video, corresponding to the video frame for this location.
gooPornographyLikelihood :: Lens' GoogleCloudVideointelligenceV1p2beta1_ExplicitContentFrame (Maybe GoogleCloudVideointelligenceV1p2beta1_ExplicitContentFramePornographyLikelihood) Source #
Likelihood of the pornography content.
GoogleCloudVideointelligenceV1_Entity
data GoogleCloudVideointelligenceV1_Entity Source #
Detected entity from video analysis.
See: googleCloudVideointelligenceV1_Entity
smart constructor.
Instances
googleCloudVideointelligenceV1_Entity :: GoogleCloudVideointelligenceV1_Entity Source #
Creates a value of GoogleCloudVideointelligenceV1_Entity
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvecLanguageCode :: Lens' GoogleCloudVideointelligenceV1_Entity (Maybe Text) Source #
Language code for `description` in BCP-47 format.
gcvvecEntityId :: Lens' GoogleCloudVideointelligenceV1_Entity (Maybe Text) Source #
Opaque entity ID. Some IDs may be available in Google Knowledge Graph Search API.
gcvvecDescription :: Lens' GoogleCloudVideointelligenceV1_Entity (Maybe Text) Source #
Textual description, e.g. `Fixed-gear bicycle`.
GoogleCloudVideointelligenceV1beta2_VideoAnnotationProgress
data GoogleCloudVideointelligenceV1beta2_VideoAnnotationProgress Source #
Annotation progress for a single video.
See: googleCloudVideointelligenceV1beta2_VideoAnnotationProgress
smart constructor.
Instances
googleCloudVideointelligenceV1beta2_VideoAnnotationProgress :: GoogleCloudVideointelligenceV1beta2_VideoAnnotationProgress Source #
Creates a value of GoogleCloudVideointelligenceV1beta2_VideoAnnotationProgress
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvvapsStartTime :: Lens' GoogleCloudVideointelligenceV1beta2_VideoAnnotationProgress (Maybe UTCTime) Source #
Time when the request was received.
gcvvvapsInputURI :: Lens' GoogleCloudVideointelligenceV1beta2_VideoAnnotationProgress (Maybe Text) Source #
Video file location in Google Cloud Storage.
gcvvvapsProgressPercent :: Lens' GoogleCloudVideointelligenceV1beta2_VideoAnnotationProgress (Maybe Int32) Source #
Approximate percentage processed thus far. Guaranteed to be 100 when fully processed.
gcvvvapsUpdateTime :: Lens' GoogleCloudVideointelligenceV1beta2_VideoAnnotationProgress (Maybe UTCTime) Source #
Time of the most recent update.
GoogleCloudVideointelligenceV1beta2_SpeechTranscription
data GoogleCloudVideointelligenceV1beta2_SpeechTranscription Source #
A speech recognition result corresponding to a portion of the audio.
See: googleCloudVideointelligenceV1beta2_SpeechTranscription
smart constructor.
Instances
googleCloudVideointelligenceV1beta2_SpeechTranscription :: GoogleCloudVideointelligenceV1beta2_SpeechTranscription Source #
Creates a value of GoogleCloudVideointelligenceV1beta2_SpeechTranscription
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvstcAlternatives :: Lens' GoogleCloudVideointelligenceV1beta2_SpeechTranscription [GoogleCloudVideointelligenceV1beta2_SpeechRecognitionAlternative] Source #
May contain one or more recognition hypotheses (up to the maximum specified in `max_alternatives`). These alternatives are ordered in terms of accuracy, with the top (first) alternative being the most probable, as ranked by the recognizer.
gcvvstcLanguageCode :: Lens' GoogleCloudVideointelligenceV1beta2_SpeechTranscription (Maybe Text) Source #
Output only. The BCP-47 language tag of the language in this result. This language code was detected to have the most likelihood of being spoken in the audio.
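Because the alternatives are ranked by the recognizer, picking the most probable transcript is just a matter of inspecting the head of the list. A sketch (assuming the gogol re-exports from Network.Google.VideoIntelligence):

```haskell
import Control.Lens ((^.))
import Data.Text (Text)
import Network.Google.VideoIntelligence

-- The first alternative, when present, is the recognizer's top hypothesis;
-- its transcript may itself be absent, hence the nested Maybe handling.
topTranscript :: GoogleCloudVideointelligenceV1beta2_SpeechTranscription -> Maybe Text
topTranscript st =
  case st ^. gcvvstcAlternatives of
    (best : _) -> best ^. gcvvsra1Transcript
    []         -> Nothing
```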
GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationResults
data GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationResults Source #
Annotation results for a single video.
See: googleCloudVideointelligenceV1p1beta1_VideoAnnotationResults
smart constructor.
Instances
googleCloudVideointelligenceV1p1beta1_VideoAnnotationResults :: GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationResults Source #
Creates a value of GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationResults
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gooShotAnnotations :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationResults [GoogleCloudVideointelligenceV1p1beta1_VideoSegment] Source #
Shot annotations. Each shot is represented as a video segment.
gooShotLabelAnnotations :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationResults [GoogleCloudVideointelligenceV1p1beta1_LabelAnnotation] Source #
Label annotations on shot level. There is exactly one element for each unique label.
gooInputURI :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationResults (Maybe Text) Source #
Video file location in Google Cloud Storage.
gooError :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationResults (Maybe GoogleRpc_Status) Source #
If set, indicates an error. Note that for a single `AnnotateVideoRequest` some videos may succeed and some may fail.
gooFrameLabelAnnotations :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationResults [GoogleCloudVideointelligenceV1p1beta1_LabelAnnotation] Source #
Label annotations on frame level. There is exactly one element for each unique label.
gooSpeechTranscriptions :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationResults [GoogleCloudVideointelligenceV1p1beta1_SpeechTranscription] Source #
Speech transcription.
gooSegmentLabelAnnotations :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationResults [GoogleCloudVideointelligenceV1p1beta1_LabelAnnotation] Source #
Label annotations on video level or user specified segment level. There is exactly one element for each unique label.
gooExplicitAnnotation :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationResults (Maybe GoogleCloudVideointelligenceV1p1beta1_ExplicitContentAnnotation) Source #
Explicit content annotation.
Xgafv
data Xgafv Source #
V1 error format.
Instances
GoogleCloudVideointelligenceV1_AnnotateVideoProgress
data GoogleCloudVideointelligenceV1_AnnotateVideoProgress Source #
Video annotation progress. Included in the `metadata` field of the `Operation` returned by the `GetOperation` call of the `google::longrunning::Operations` service.
See: googleCloudVideointelligenceV1_AnnotateVideoProgress
smart constructor.
Instances
googleCloudVideointelligenceV1_AnnotateVideoProgress :: GoogleCloudVideointelligenceV1_AnnotateVideoProgress Source #
Creates a value of GoogleCloudVideointelligenceV1_AnnotateVideoProgress
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gooAnnotationProgress :: Lens' GoogleCloudVideointelligenceV1_AnnotateVideoProgress [GoogleCloudVideointelligenceV1_VideoAnnotationProgress] Source #
Progress metadata for all videos specified in `AnnotateVideoRequest`.
GoogleCloudVideointelligenceV1p1beta1_SpeechTranscriptionConfig
data GoogleCloudVideointelligenceV1p1beta1_SpeechTranscriptionConfig Source #
Config for SPEECH_TRANSCRIPTION.
See: googleCloudVideointelligenceV1p1beta1_SpeechTranscriptionConfig
smart constructor.
Instances
googleCloudVideointelligenceV1p1beta1_SpeechTranscriptionConfig :: GoogleCloudVideointelligenceV1p1beta1_SpeechTranscriptionConfig Source #
Creates a value of GoogleCloudVideointelligenceV1p1beta1_SpeechTranscriptionConfig
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvstccSpeechContexts :: Lens' GoogleCloudVideointelligenceV1p1beta1_SpeechTranscriptionConfig [GoogleCloudVideointelligenceV1p1beta1_SpeechContext] Source #
*Optional* A means to provide context to assist the speech recognition.
gcvvstccLanguageCode :: Lens' GoogleCloudVideointelligenceV1p1beta1_SpeechTranscriptionConfig (Maybe Text) Source #
*Required* The language of the supplied audio as a BCP-47 language tag. Example: "en-US". See Language Support for a list of the currently supported language codes.
gcvvstccAudioTracks :: Lens' GoogleCloudVideointelligenceV1p1beta1_SpeechTranscriptionConfig [Int32] Source #
*Optional* For file formats, such as MXF or MKV, supporting multiple audio tracks, specify up to two tracks. Default: track 0.
gcvvstccEnableAutomaticPunctuation :: Lens' GoogleCloudVideointelligenceV1p1beta1_SpeechTranscriptionConfig (Maybe Bool) Source #
*Optional* If 'true', adds punctuation to recognition result hypotheses. This feature is only available in select languages. Setting this for requests in other languages has no effect at all. The default 'false' value does not add punctuation to result hypotheses. NOTE: "This is currently offered as an experimental service, complimentary to all users. In the future this may be exclusively available as a premium feature."
gcvvstccMaxAlternatives :: Lens' GoogleCloudVideointelligenceV1p1beta1_SpeechTranscriptionConfig (Maybe Int32) Source #
*Optional* Maximum number of recognition hypotheses to be returned. Specifically, the maximum number of `SpeechRecognitionAlternative` messages within each `SpeechTranscription`. The server may return fewer than `max_alternatives`. Valid values are `0`-`30`. A value of `0` or `1` will return a maximum of one. If omitted, will return a maximum of one.
gcvvstccEnableSpeakerDiarization :: Lens' GoogleCloudVideointelligenceV1p1beta1_SpeechTranscriptionConfig (Maybe Bool) Source #
*Optional* If 'true', enables speaker detection for each recognized word in the top alternative of the recognition result, using a speaker_tag provided in the WordInfo. Note: when this is true, we send all the words from the beginning of the audio for the top alternative in every consecutive response. This is done in order to improve our speaker tags as our models learn to identify the speakers in the conversation over time.
gcvvstccFilterProfanity :: Lens' GoogleCloudVideointelligenceV1p1beta1_SpeechTranscriptionConfig (Maybe Bool) Source #
*Optional* If set to `true`, the server will attempt to filter out profanities, replacing all but the initial character in each filtered word with asterisks, e.g. "f***". If set to `false` or omitted, profanities won't be filtered out.
gcvvstccDiarizationSpeakerCount :: Lens' GoogleCloudVideointelligenceV1p1beta1_SpeechTranscriptionConfig (Maybe Int32) Source #
*Optional* If set, specifies the estimated number of speakers in the conversation. If not set, defaults to '2'. Ignored unless enable_speaker_diarization is set to true.
gcvvstccEnableWordConfidence :: Lens' GoogleCloudVideointelligenceV1p1beta1_SpeechTranscriptionConfig (Maybe Bool) Source #
*Optional* If `true`, the top result includes a list of words and the confidence for those words. If `false`, no word-level confidence information is returned. The default is `false`.
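The config above is typically built by setting only the required language code plus whichever optional knobs you need. A sketch (assuming the gogol re-exports from Network.Google.VideoIntelligence and the lens operators `&`, `.~`, `?~`):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((&), (.~), (?~))
import Network.Google.VideoIntelligence

-- Required language code plus a few optional settings. List-typed fields
-- (like the audio tracks) use (.~); Maybe-typed fields use (?~).
speechConfig :: GoogleCloudVideointelligenceV1p1beta1_SpeechTranscriptionConfig
speechConfig =
  googleCloudVideointelligenceV1p1beta1_SpeechTranscriptionConfig
    & gcvvstccLanguageCode ?~ "en-US"
    & gcvvstccEnableAutomaticPunctuation ?~ True
    & gcvvstccMaxAlternatives ?~ 3
    & gcvvstccAudioTracks .~ [0]
```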
GoogleCloudVideointelligenceV1beta2_ExplicitContentFramePornographyLikelihood
data GoogleCloudVideointelligenceV1beta2_ExplicitContentFramePornographyLikelihood Source #
Likelihood of the pornography content.
Constructors
GOOLikelihoodUnspecified: Unspecified likelihood.
GOOVeryUnlikely: Very unlikely.
GOOUnlikely: Unlikely.
GOOPossible: Possible.
GOOLikely: Likely.
GOOVeryLikely: Very likely.
Instances
GoogleLongrunning_OperationResponse
data GoogleLongrunning_OperationResponse Source #
The normal response of the operation in case of success. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
See: googleLongrunning_OperationResponse
smart constructor.
Instances
googleLongrunning_OperationResponse :: HashMap Text JSONValue -> GoogleLongrunning_OperationResponse Source #
Creates a value of GoogleLongrunning_OperationResponse
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
glorAddtional :: Lens' GoogleLongrunning_OperationResponse (HashMap Text JSONValue) Source #
Properties of the object. Contains field `@type` with type URL.
GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationProgress
data GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationProgress Source #
Annotation progress for a single video.
See: googleCloudVideointelligenceV1p1beta1_VideoAnnotationProgress
smart constructor.
Instances
googleCloudVideointelligenceV1p1beta1_VideoAnnotationProgress :: GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationProgress Source #
Creates a value of GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationProgress
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvvapcStartTime :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationProgress (Maybe UTCTime) Source #
Time when the request was received.
gcvvvapcInputURI :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationProgress (Maybe Text) Source #
Video file location in Google Cloud Storage.
gcvvvapcProgressPercent :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationProgress (Maybe Int32) Source #
Approximate percentage processed thus far. Guaranteed to be 100 when fully processed.
gcvvvapcUpdateTime :: Lens' GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationProgress (Maybe UTCTime) Source #
Time of the most recent update.
GoogleCloudVideointelligenceV1p2beta1_TextFrame
data GoogleCloudVideointelligenceV1p2beta1_TextFrame Source #
Video frame level annotation results for text annotation (OCR). Contains information regarding timestamp and bounding box locations for the frames containing detected OCR text snippets.
See: googleCloudVideointelligenceV1p2beta1_TextFrame
smart constructor.
Instances
googleCloudVideointelligenceV1p2beta1_TextFrame :: GoogleCloudVideointelligenceV1p2beta1_TextFrame Source #
Creates a value of GoogleCloudVideointelligenceV1p2beta1_TextFrame
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvtfRotatedBoundingBox :: Lens' GoogleCloudVideointelligenceV1p2beta1_TextFrame (Maybe GoogleCloudVideointelligenceV1p2beta1_NormalizedBoundingPoly) Source #
Bounding polygon of the detected text for this frame.
gcvvtfTimeOffSet :: Lens' GoogleCloudVideointelligenceV1p2beta1_TextFrame (Maybe Scientific) Source #
Timestamp of this frame.
GoogleCloudVideointelligenceV1beta2_LabelAnnotation
data GoogleCloudVideointelligenceV1beta2_LabelAnnotation Source #
Label annotation.
See: googleCloudVideointelligenceV1beta2_LabelAnnotation
smart constructor.
Instances
googleCloudVideointelligenceV1beta2_LabelAnnotation :: GoogleCloudVideointelligenceV1beta2_LabelAnnotation Source #
Creates a value of GoogleCloudVideointelligenceV1beta2_LabelAnnotation
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
goooCategoryEntities :: Lens' GoogleCloudVideointelligenceV1beta2_LabelAnnotation [GoogleCloudVideointelligenceV1beta2_Entity] Source #
Common categories for the detected entity. For example, when the label is `Terrier`, the category is likely `dog`. In some cases there might be more than one category; e.g. `Terrier` could also be a `pet`.
goooFrames :: Lens' GoogleCloudVideointelligenceV1beta2_LabelAnnotation [GoogleCloudVideointelligenceV1beta2_LabelFrame] Source #
All video frames where a label was detected.
goooSegments :: Lens' GoogleCloudVideointelligenceV1beta2_LabelAnnotation [GoogleCloudVideointelligenceV1beta2_LabelSegment] Source #
All video segments where a label was detected.
goooEntity :: Lens' GoogleCloudVideointelligenceV1beta2_LabelAnnotation (Maybe GoogleCloudVideointelligenceV1beta2_Entity) Source #
Detected entity.
GoogleCloudVideointelligenceV1p1beta1_LabelDetectionConfig
data GoogleCloudVideointelligenceV1p1beta1_LabelDetectionConfig Source #
Config for LABEL_DETECTION.
See: googleCloudVideointelligenceV1p1beta1_LabelDetectionConfig
smart constructor.
Instances
googleCloudVideointelligenceV1p1beta1_LabelDetectionConfig :: GoogleCloudVideointelligenceV1p1beta1_LabelDetectionConfig Source #
Creates a value of GoogleCloudVideointelligenceV1p1beta1_LabelDetectionConfig
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvldcLabelDetectionMode :: Lens' GoogleCloudVideointelligenceV1p1beta1_LabelDetectionConfig (Maybe GoogleCloudVideointelligenceV1p1beta1_LabelDetectionConfigLabelDetectionMode) Source #
What labels should be detected with LABEL_DETECTION, in addition to video-level labels or segment-level labels. If unspecified, defaults to `SHOT_MODE`.
gcvvldcStationaryCamera :: Lens' GoogleCloudVideointelligenceV1p1beta1_LabelDetectionConfig (Maybe Bool) Source #
Whether the video has been shot from a stationary (i.e. non-moving) camera. When set to true, might improve detection accuracy for moving objects. Should be used with `SHOT_AND_FRAME_MODE` enabled.
gcvvldcModel :: Lens' GoogleCloudVideointelligenceV1p1beta1_LabelDetectionConfig (Maybe Text) Source #
Model to use for label detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest".
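A request-side config can be built the same way; a sketch assuming the `Control.Lens` operators, with illustrative field values:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((&), (?~))

-- Config for footage from a fixed camera, using the latest built-in model.
labelConfig :: GoogleCloudVideointelligenceV1p1beta1_LabelDetectionConfig
labelConfig =
  googleCloudVideointelligenceV1p1beta1_LabelDetectionConfig
    & gcvvldcStationaryCamera ?~ True
    & gcvvldcModel ?~ "builtin/latest"
```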
GoogleCloudVideointelligenceV1p1beta1_SpeechTranscription
data GoogleCloudVideointelligenceV1p1beta1_SpeechTranscription Source #
A speech recognition result corresponding to a portion of the audio.
See: googleCloudVideointelligenceV1p1beta1_SpeechTranscription
smart constructor.
Instances
googleCloudVideointelligenceV1p1beta1_SpeechTranscription :: GoogleCloudVideointelligenceV1p1beta1_SpeechTranscription Source #
Creates a value of GoogleCloudVideointelligenceV1p1beta1_SpeechTranscription
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
ggAlternatives :: Lens' GoogleCloudVideointelligenceV1p1beta1_SpeechTranscription [GoogleCloudVideointelligenceV1p1beta1_SpeechRecognitionAlternative] Source #
May contain one or more recognition hypotheses (up to the maximum specified in `max_alternatives`). These alternatives are ordered in terms of accuracy, with the top (first) alternative being the most probable, as ranked by the recognizer.
ggLanguageCode :: Lens' GoogleCloudVideointelligenceV1p1beta1_SpeechTranscription (Maybe Text) Source #
Output only. The BCP-47 language tag of the language in this result, detected as the most likely language spoken in the audio.
GoogleCloudVideointelligenceV1beta2_VideoAnnotationResults
data GoogleCloudVideointelligenceV1beta2_VideoAnnotationResults Source #
Annotation results for a single video.
See: googleCloudVideointelligenceV1beta2_VideoAnnotationResults
smart constructor.
Instances
googleCloudVideointelligenceV1beta2_VideoAnnotationResults :: GoogleCloudVideointelligenceV1beta2_VideoAnnotationResults Source #
Creates a value of GoogleCloudVideointelligenceV1beta2_VideoAnnotationResults
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvvarcShotAnnotations :: Lens' GoogleCloudVideointelligenceV1beta2_VideoAnnotationResults [GoogleCloudVideointelligenceV1beta2_VideoSegment] Source #
Shot annotations. Each shot is represented as a video segment.
gcvvvarcShotLabelAnnotations :: Lens' GoogleCloudVideointelligenceV1beta2_VideoAnnotationResults [GoogleCloudVideointelligenceV1beta2_LabelAnnotation] Source #
Label annotations on shot level. There is exactly one element for each unique label.
gcvvvarcInputURI :: Lens' GoogleCloudVideointelligenceV1beta2_VideoAnnotationResults (Maybe Text) Source #
Video file location in Google Cloud Storage.
gcvvvarcError :: Lens' GoogleCloudVideointelligenceV1beta2_VideoAnnotationResults (Maybe GoogleRpc_Status) Source #
If set, indicates an error. Note that for a single `AnnotateVideoRequest` some videos may succeed and some may fail.
gcvvvarcFrameLabelAnnotations :: Lens' GoogleCloudVideointelligenceV1beta2_VideoAnnotationResults [GoogleCloudVideointelligenceV1beta2_LabelAnnotation] Source #
Label annotations on frame level. There is exactly one element for each unique label.
gcvvvarcSpeechTranscriptions :: Lens' GoogleCloudVideointelligenceV1beta2_VideoAnnotationResults [GoogleCloudVideointelligenceV1beta2_SpeechTranscription] Source #
Speech transcription.
gcvvvarcSegmentLabelAnnotations :: Lens' GoogleCloudVideointelligenceV1beta2_VideoAnnotationResults [GoogleCloudVideointelligenceV1beta2_LabelAnnotation] Source #
Label annotations on video level or user specified segment level. There is exactly one element for each unique label.
gcvvvarcExplicitAnnotation :: Lens' GoogleCloudVideointelligenceV1beta2_VideoAnnotationResults (Maybe GoogleCloudVideointelligenceV1beta2_ExplicitContentAnnotation) Source #
Explicit content annotation.
GoogleCloudVideointelligenceV1p2beta1_LabelSegment
data GoogleCloudVideointelligenceV1p2beta1_LabelSegment Source #
Video segment level annotation results for label detection.
See: googleCloudVideointelligenceV1p2beta1_LabelSegment
smart constructor.
Instances
googleCloudVideointelligenceV1p2beta1_LabelSegment :: GoogleCloudVideointelligenceV1p2beta1_LabelSegment Source #
Creates a value of GoogleCloudVideointelligenceV1p2beta1_LabelSegment
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvls1Confidence :: Lens' GoogleCloudVideointelligenceV1p2beta1_LabelSegment (Maybe Double) Source #
Confidence that the label is accurate. Range: [0, 1].
gcvvls1Segment :: Lens' GoogleCloudVideointelligenceV1p2beta1_LabelSegment (Maybe GoogleCloudVideointelligenceV1p2beta1_VideoSegment) Source #
Video segment where a label was detected.
GoogleCloudVideointelligenceV1p2beta1_NormalizedBoundingBox
data GoogleCloudVideointelligenceV1p2beta1_NormalizedBoundingBox Source #
Normalized bounding box. The normalized vertex coordinates are relative to the original image. Range: [0, 1].
See: googleCloudVideointelligenceV1p2beta1_NormalizedBoundingBox
smart constructor.
Instances
googleCloudVideointelligenceV1p2beta1_NormalizedBoundingBox :: GoogleCloudVideointelligenceV1p2beta1_NormalizedBoundingBox Source #
Creates a value of GoogleCloudVideointelligenceV1p2beta1_NormalizedBoundingBox
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvnbbBottom :: Lens' GoogleCloudVideointelligenceV1p2beta1_NormalizedBoundingBox (Maybe Double) Source #
Bottom Y coordinate.
gcvvnbbLeft :: Lens' GoogleCloudVideointelligenceV1p2beta1_NormalizedBoundingBox (Maybe Double) Source #
Left X coordinate.
gcvvnbbRight :: Lens' GoogleCloudVideointelligenceV1p2beta1_NormalizedBoundingBox (Maybe Double) Source #
Right X coordinate.
gcvvnbbTop :: Lens' GoogleCloudVideointelligenceV1p2beta1_NormalizedBoundingBox (Maybe Double) Source #
Top Y coordinate.
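Since all four coordinates are normalized to [0, 1], a box is specified as fractions of the frame; a sketch (assuming the `Control.Lens` operators) covering the centre quarter of the frame:

```haskell
import Control.Lens ((&), (?~))

-- Coordinates are fractions of the frame width/height.
centreBox :: GoogleCloudVideointelligenceV1p2beta1_NormalizedBoundingBox
centreBox =
  googleCloudVideointelligenceV1p2beta1_NormalizedBoundingBox
    & gcvvnbbLeft   ?~ 0.25
    & gcvvnbbTop    ?~ 0.25
    & gcvvnbbRight  ?~ 0.75
    & gcvvnbbBottom ?~ 0.75
```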
GoogleCloudVideointelligenceV1p2beta1_TextSegment
data GoogleCloudVideointelligenceV1p2beta1_TextSegment Source #
Video segment level annotation results for text detection.
See: googleCloudVideointelligenceV1p2beta1_TextSegment
smart constructor.
Instances
googleCloudVideointelligenceV1p2beta1_TextSegment :: GoogleCloudVideointelligenceV1p2beta1_TextSegment Source #
Creates a value of GoogleCloudVideointelligenceV1p2beta1_TextSegment
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvtsFrames :: Lens' GoogleCloudVideointelligenceV1p2beta1_TextSegment [GoogleCloudVideointelligenceV1p2beta1_TextFrame] Source #
Information related to the frames where OCR detected text appears.
gcvvtsConfidence :: Lens' GoogleCloudVideointelligenceV1p2beta1_TextSegment (Maybe Double) Source #
Confidence for the track of detected text. It is calculated as the highest over all frames where OCR detected text appears.
gcvvtsSegment :: Lens' GoogleCloudVideointelligenceV1p2beta1_TextSegment (Maybe GoogleCloudVideointelligenceV1p2beta1_VideoSegment) Source #
Video segment where a text snippet was detected.
GoogleCloudVideointelligenceV1p2beta1_SpeechTranscription
data GoogleCloudVideointelligenceV1p2beta1_SpeechTranscription Source #
A speech recognition result corresponding to a portion of the audio.
See: googleCloudVideointelligenceV1p2beta1_SpeechTranscription
smart constructor.
Instances
googleCloudVideointelligenceV1p2beta1_SpeechTranscription :: GoogleCloudVideointelligenceV1p2beta1_SpeechTranscription Source #
Creates a value of GoogleCloudVideointelligenceV1p2beta1_SpeechTranscription
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
goooAlternatives :: Lens' GoogleCloudVideointelligenceV1p2beta1_SpeechTranscription [GoogleCloudVideointelligenceV1p2beta1_SpeechRecognitionAlternative] Source #
May contain one or more recognition hypotheses (up to the maximum specified in `max_alternatives`). These alternatives are ordered in terms of accuracy, with the top (first) alternative being the most probable, as ranked by the recognizer.
goooLanguageCode :: Lens' GoogleCloudVideointelligenceV1p2beta1_SpeechTranscription (Maybe Text) Source #
Output only. The BCP-47 language tag of the language in this result, detected as the most likely language spoken in the audio.
GoogleRpc_Status
data GoogleRpc_Status Source #
The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. The error model is designed to be simple to use and understand for most users, and flexible enough to meet unexpected needs.

Overview. The `Status` message contains three pieces of data: error code, error message, and error details. The error code should be an enum value of google.rpc.Code, but it may accept additional error codes if needed. The error message should be a developer-facing English message that helps developers *understand* and *resolve* the error. If a localized user-facing error message is needed, put the localized message in the error details or localize it in the client. The optional error details may contain arbitrary information about the error. There is a predefined set of error detail types in the package `google.rpc` that can be used for common error conditions.

Language mapping. The `Status` message is the logical representation of the error model, but it is not necessarily the actual wire format. When the `Status` message is exposed in different client libraries and different wire protocols, it can be mapped differently. For example, it will likely be mapped to some exceptions in Java, but more likely to some error codes in C.

Other uses. The error model and the `Status` message can be used in a variety of environments, either with or without APIs, to provide a consistent developer experience across different environments. Example uses of this error model include:

- Partial errors. If a service needs to return partial errors to the client, it may embed the `Status` in the normal response to indicate the partial errors.
- Workflow errors. A typical workflow has multiple steps. Each step may have a `Status` message for error reporting.
- Batch operations. If a client uses batch request and batch response, the `Status` message should be used directly inside batch response, one for each error sub-response.
- Asynchronous operations. If an API call embeds asynchronous operation results in its response, the status of those operations should be represented directly using the `Status` message.
- Logging. If some API errors are stored in logs, the message `Status` could be used directly after any stripping needed for security/privacy reasons.
See: googleRpc_Status
smart constructor.
Instances
googleRpc_Status :: GoogleRpc_Status Source #
Creates a value of GoogleRpc_Status
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
grsDetails :: Lens' GoogleRpc_Status [GoogleRpc_StatusDetailsItem] Source #
A list of messages that carry the error details. There is a common set of message types for APIs to use.
grsCode :: Lens' GoogleRpc_Status (Maybe Int32) Source #
The status code, which should be an enum value of google.rpc.Code.
grsMessage :: Lens' GoogleRpc_Status (Maybe Text) Source #
A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
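On the read side, the same lenses can be viewed with `(^.)`; a sketch of rendering a received status for a log line (`describeStatus` is a hypothetical helper):

```haskell
import Control.Lens ((^.))

-- Fall back to placeholders when optional fields are absent.
describeStatus :: GoogleRpc_Status -> String
describeStatus s =
  "code=" <> maybe "?" show (s ^. grsCode)
    <> ", message=" <> maybe "<none>" show (s ^. grsMessage)
```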
GoogleCloudVideointelligenceV1_VideoSegment
data GoogleCloudVideointelligenceV1_VideoSegment Source #
Video segment.
See: googleCloudVideointelligenceV1_VideoSegment
smart constructor.
Instances
googleCloudVideointelligenceV1_VideoSegment :: GoogleCloudVideointelligenceV1_VideoSegment Source #
Creates a value of GoogleCloudVideointelligenceV1_VideoSegment
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvvscStartTimeOffSet :: Lens' GoogleCloudVideointelligenceV1_VideoSegment (Maybe Scientific) Source #
Time-offset, relative to the beginning of the video, corresponding to the start of the segment (inclusive).
gcvvvscEndTimeOffSet :: Lens' GoogleCloudVideointelligenceV1_VideoSegment (Maybe Scientific) Source #
Time-offset, relative to the beginning of the video, corresponding to the end of the segment (inclusive).
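The offsets are `Scientific` values, and `Scientific` has `Num` and `Fractional` instances, so numeric literals work directly. A sketch (assuming the `Control.Lens` operators) of a segment from offset 5 to 12.5, where the units follow the API's duration encoding:

```haskell
import Control.Lens ((&), (?~))

-- Offsets are relative to the beginning of the video.
seg :: GoogleCloudVideointelligenceV1_VideoSegment
seg =
  googleCloudVideointelligenceV1_VideoSegment
    & gcvvvscStartTimeOffSet ?~ 5
    & gcvvvscEndTimeOffSet   ?~ 12.5
```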
GoogleCloudVideointelligenceV1p1beta1_ExplicitContentAnnotation
data GoogleCloudVideointelligenceV1p1beta1_ExplicitContentAnnotation Source #
Explicit content annotation (based on per-frame visual signals only). If no explicit content has been detected in a frame, no annotations are present for that frame.
See: googleCloudVideointelligenceV1p1beta1_ExplicitContentAnnotation
smart constructor.
Instances
googleCloudVideointelligenceV1p1beta1_ExplicitContentAnnotation :: GoogleCloudVideointelligenceV1p1beta1_ExplicitContentAnnotation Source #
Creates a value of GoogleCloudVideointelligenceV1p1beta1_ExplicitContentAnnotation
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvecacFrames :: Lens' GoogleCloudVideointelligenceV1p1beta1_ExplicitContentAnnotation [GoogleCloudVideointelligenceV1p1beta1_ExplicitContentFrame] Source #
All video frames where explicit content was detected.
GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoResponse
data GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoResponse Source #
Video annotation response. Included in the `response` field of the `Operation` returned by the `GetOperation` call of the `google::longrunning::Operations` service.
See: googleCloudVideointelligenceV1p1beta1_AnnotateVideoResponse
smart constructor.
Instances
googleCloudVideointelligenceV1p1beta1_AnnotateVideoResponse :: GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoResponse Source #
Creates a value of GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoResponse
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvavrcAnnotationResults :: Lens' GoogleCloudVideointelligenceV1p1beta1_AnnotateVideoResponse [GoogleCloudVideointelligenceV1p1beta1_VideoAnnotationResults] Source #
Annotation results for all videos specified in `AnnotateVideoRequest`.
GoogleCloudVideointelligenceV1beta2_ExplicitContentFrame
data GoogleCloudVideointelligenceV1beta2_ExplicitContentFrame Source #
Video frame level annotation results for explicit content.
See: googleCloudVideointelligenceV1beta2_ExplicitContentFrame
smart constructor.
Instances
googleCloudVideointelligenceV1beta2_ExplicitContentFrame :: GoogleCloudVideointelligenceV1beta2_ExplicitContentFrame Source #
Creates a value of GoogleCloudVideointelligenceV1beta2_ExplicitContentFrame
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvecfcTimeOffSet :: Lens' GoogleCloudVideointelligenceV1beta2_ExplicitContentFrame (Maybe Scientific) Source #
Time-offset, relative to the beginning of the video, corresponding to the video frame for this location.
gcvvecfcPornographyLikelihood :: Lens' GoogleCloudVideointelligenceV1beta2_ExplicitContentFrame (Maybe GoogleCloudVideointelligenceV1beta2_ExplicitContentFramePornographyLikelihood) Source #
Likelihood of the pornography content.
GoogleCloudVideointelligenceV1p1beta1_SpeechContext
data GoogleCloudVideointelligenceV1p1beta1_SpeechContext Source #
Provides "hints" to the speech recognizer to favor specific words and phrases in the results.
See: googleCloudVideointelligenceV1p1beta1_SpeechContext
smart constructor.
Instances
googleCloudVideointelligenceV1p1beta1_SpeechContext :: GoogleCloudVideointelligenceV1p1beta1_SpeechContext Source #
Creates a value of GoogleCloudVideointelligenceV1p1beta1_SpeechContext
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
gcvvscPhrases :: Lens' GoogleCloudVideointelligenceV1p1beta1_SpeechContext [Text] Source #
*Optional* A list of strings containing words and phrases "hints" so that the speech recognition is more likely to recognize them. This can be used to improve the accuracy for specific words and phrases, for example, if specific commands are typically spoken by the user. This can also be used to add additional words to the vocabulary of the recognizer. See usage limits.
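Phrase hints are set with the list lens; a sketch with illustrative phrases (requires `OverloadedStrings` for the `Text` literals):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((&), (.~))

-- Bias recognition toward domain-specific commands.
speechCtx :: GoogleCloudVideointelligenceV1p1beta1_SpeechContext
speechCtx =
  googleCloudVideointelligenceV1p1beta1_SpeechContext
    & gcvvscPhrases .~ ["roll camera", "cut", "action"]
```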
GoogleCloudVideointelligenceV1_LabelSegment
data GoogleCloudVideointelligenceV1_LabelSegment Source #
Video segment level annotation results for label detection.
See: googleCloudVideointelligenceV1_LabelSegment
smart constructor.
Instances
googleCloudVideointelligenceV1_LabelSegment :: GoogleCloudVideointelligenceV1_LabelSegment Source #
Creates a value of GoogleCloudVideointelligenceV1_LabelSegment
with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
g2Confidence :: Lens' GoogleCloudVideointelligenceV1_LabelSegment (Maybe Double) Source #
Confidence that the label is accurate. Range: [0, 1].
g2Segment :: Lens' GoogleCloudVideointelligenceV1_LabelSegment (Maybe GoogleCloudVideointelligenceV1_VideoSegment) Source #
Video segment where a label was detected.