amazonka-rekognition-1.4.5: Amazon Rekognition SDK.

Copyright    : (c) 2013-2016 Brendan Hay
License      : Mozilla Public License, v. 2.0.
Maintainer   : Brendan Hay <brendan.g.hay@gmail.com>
Stability    : auto-generated
Portability  : non-portable (GHC extensions)
Safe Haskell : None
Language     : Haskell2010

Network.AWS.Rekognition.DetectLabels

Contents

Description

Detects instances of real-world labels within an image (JPEG or PNG) provided as input. This includes objects like flower, tree, and table; events like wedding, graduation, and birthday party; and concepts like landscape, evening, and nature. For an example, see 'get-started-exercise-detect-labels'.

For each object, scene, and concept, the API returns one or more labels. Each label provides the object name and the level of confidence that the image contains the object. For example, suppose the input image has a lighthouse, the sea, and a rock. The response will include all three labels, one for each object.

{Name: lighthouse, Confidence: 98.4629}
{Name: rock, Confidence: 79.2097}
{Name: sea, Confidence: 75.061}

In the preceding example, the operation returns one label for each of the three objects. The operation can also return multiple labels for the same object in the image. For example, if the input image shows a flower (for example, a tulip), the operation might return the following three labels.

{Name: flower, Confidence: 99.0562}
{Name: plant, Confidence: 99.0562}
{Name: tulip, Confidence: 99.0562}

In this example, the detection algorithm more precisely identifies the flower as a tulip.

You can provide the input image as an S3 object or as base64-encoded bytes. In response, the API returns an array of labels, along with the orientation correction. Optionally, you can specify MinConfidence to control the confidence threshold for the labels returned; the default is 50%. You can also add the MaxLabels parameter to limit the number of labels returned.

This is a stateless API operation. That is, the operation does not persist any data.

This operation requires permissions to perform the rekognition:DetectLabels action.
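By way of illustration, the sketch below builds a request for an S3-hosted image, raises the confidence threshold to 70 percent, caps the result at 10 labels, and prints whatever comes back. It assumes the amazonka 1.4.x plumbing (newEnv taking a Region and Credentials, runAWS, send) and the image, s3Object, iS3Object, soBucket, and soName names from Network.AWS.Rekognition.Types; the bucket and object key are placeholders.

{-# LANGUAGE OverloadedStrings #-}

import Control.Lens                 ((&), (?~), (^.))
import Control.Monad.Trans.Resource (runResourceT)
import Network.AWS                  (Credentials (Discover), Region (NorthVirginia),
                                     newEnv, runAWS, send)
import Network.AWS.Rekognition

-- Detect labels in an S3-hosted image, keeping only labels with at least
-- 70 percent confidence and returning at most 10 of them.
detectBucketImage :: IO ()
detectBucketImage = do
    env <- newEnv NorthVirginia Discover      -- 1.4.x newEnv: Region, then Credentials
    let img = image & iS3Object ?~ (s3Object & soBucket ?~ "my-bucket"               -- placeholder bucket
                                             & soName   ?~ "photos/lighthouse.jpg")  -- placeholder key
        req = detectLabels img & dlMinConfidence ?~ 70
                               & dlMaxLabels     ?~ 10
    rs <- runResourceT . runAWS env $ send req
    print (rs ^. dlrsLabels)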

Synopsis

Creating a Request

detectLabels Source #

Arguments

  :: Image           dlImage
  -> DetectLabels

Creates a value of DetectLabels with the minimum fields required to make a request.

Use one of the following lenses to modify other fields as desired (a short usage sketch follows this list):

  • dlMinConfidence - Specifies the minimum confidence level for the labels to return. Amazon Rekognition doesn't return any labels with confidence lower than this specified value. If minConfidence is not specified, the operation returns labels with a confidence value greater than or equal to 50 percent.
  • dlMaxLabels - Maximum number of labels you want the service to return in the response. The service returns the specified number of highest confidence labels.
  • dlImage - The input image. You can provide a blob of image bytes or an S3 object.
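
For instance, assuming an Image value is already in hand (the request lenses below sketch how one might be built), a request that tightens both defaults could look like this; 70 and 10 are illustrative values, not recommendations:

import Control.Lens ((&), (?~))
import Network.AWS.Rekognition

-- Only labels with at least 70 percent confidence, and no more than 10 of them.
labelsRequest :: Image -> DetectLabels
labelsRequest img =
    detectLabels img
        & dlMinConfidence ?~ 70
        & dlMaxLabels     ?~ 10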

data DetectLabels Source #

See: detectLabels smart constructor.

Instances

Eq DetectLabels Source # 
Data DetectLabels Source # 

Methods

gfoldl :: (forall d b. Data d => c (d -> b) -> d -> c b) -> (forall g. g -> c g) -> DetectLabels -> c DetectLabels #

gunfold :: (forall b r. Data b => c (b -> r) -> c r) -> (forall r. r -> c r) -> Constr -> c DetectLabels #

toConstr :: DetectLabels -> Constr #

dataTypeOf :: DetectLabels -> DataType #

dataCast1 :: Typeable (* -> *) t => (forall d. Data d => c (t d)) -> Maybe (c DetectLabels) #

dataCast2 :: Typeable (* -> * -> *) t => (forall d e. (Data d, Data e) => c (t d e)) -> Maybe (c DetectLabels) #

gmapT :: (forall b. Data b => b -> b) -> DetectLabels -> DetectLabels #

gmapQl :: (r -> r' -> r) -> r -> (forall d. Data d => d -> r') -> DetectLabels -> r #

gmapQr :: (r' -> r -> r) -> r -> (forall d. Data d => d -> r') -> DetectLabels -> r #

gmapQ :: (forall d. Data d => d -> u) -> DetectLabels -> [u] #

gmapQi :: Int -> (forall d. Data d => d -> u) -> DetectLabels -> u #

gmapM :: Monad m => (forall d. Data d => d -> m d) -> DetectLabels -> m DetectLabels #

gmapMp :: MonadPlus m => (forall d. Data d => d -> m d) -> DetectLabels -> m DetectLabels #

gmapMo :: MonadPlus m => (forall d. Data d => d -> m d) -> DetectLabels -> m DetectLabels #

Read DetectLabels Source # 
Show DetectLabels Source # 
Generic DetectLabels Source # 

Associated Types

type Rep DetectLabels :: * -> * #

Hashable DetectLabels Source # 
ToJSON DetectLabels Source # 
NFData DetectLabels Source # 

Methods

rnf :: DetectLabels -> () #

AWSRequest DetectLabels Source # 
ToPath DetectLabels Source # 
ToHeaders DetectLabels Source # 
ToQuery DetectLabels Source # 
type Rep DetectLabels Source # 
type Rep DetectLabels = D1 (MetaData "DetectLabels" "Network.AWS.Rekognition.DetectLabels" "amazonka-rekognition-1.4.5-JLr9ZFBNFwFGjCyEdV0gv" False) (C1 (MetaCons "DetectLabels'" PrefixI True) ((:*:) (S1 (MetaSel (Just Symbol "_dlMinConfidence") NoSourceUnpackedness SourceStrict DecidedStrict) (Rec0 (Maybe Double))) ((:*:) (S1 (MetaSel (Just Symbol "_dlMaxLabels") NoSourceUnpackedness SourceStrict DecidedStrict) (Rec0 (Maybe Nat))) (S1 (MetaSel (Just Symbol "_dlImage") NoSourceUnpackedness SourceStrict DecidedStrict) (Rec0 Image)))))
type Rs DetectLabels Source # 

Request Lenses

dlMinConfidence :: Lens' DetectLabels (Maybe Double) Source #

Specifies the minimum confidence level for the labels to return. Amazon Rekognition doesn't return any labels with confidence lower than this specified value. If minConfidence is not specified, the operation returns labels with a confidence value greater than or equal to 50 percent.

dlMaxLabels :: Lens' DetectLabels (Maybe Natural) Source #

Maximum number of labels you want the service to return in the response. The service returns the specified number of highest confidence labels.

dlImage :: Lens' DetectLabels Image Source #

The input image. You can provide a blob of image bytes or an S3 object.
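
As a sketch of supplying dlImage via S3, and assuming the image and s3Object smart constructors with the iS3Object, iBytes, soBucket, and soName lenses from Network.AWS.Rekognition.Types (none of which are documented on this page), an S3-backed Image might be built as follows; the bucket and key are placeholders:

{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((&), (?~))
import Network.AWS.Rekognition

-- Reference an image stored in S3.  The alternative is to set iBytes with a
-- base64-encoded blob of image data instead of iS3Object.
s3Image :: Image
s3Image = image & iS3Object ?~ obj
  where
    obj = s3Object & soBucket ?~ "my-bucket"          -- placeholder bucket
                   & soName   ?~ "photos/tulip.png"   -- placeholder key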

Destructuring the Response

detectLabelsResponse Source #

Arguments

  :: Int                     dlrsResponseStatus
  -> DetectLabelsResponse

Creates a value of DetectLabelsResponse with the minimum fields required to make a request.

Use one of the following lenses to modify other fields as desired (a short usage sketch follows this list):

  • dlrsLabels - An array of labels for the real-world objects detected.
  • dlrsOrientationCorrection - Amazon Rekognition returns the orientation of the input image that was detected (clockwise direction). If your application displays the image, you can use this value to correct the orientation. If Rekognition detects that the input image was rotated (for example, by 90 degrees), it first corrects the orientation before detecting the labels.
  • dlrsResponseStatus - The response status code.
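
A small sketch of pulling these three fields out of a response value; the tuple shape is arbitrary, and OrientationCorrection is the type referenced by dlrsOrientationCorrection below:

import Control.Lens ((^.))
import Network.AWS.Rekognition

-- Destructure a response into its labels, the detected orientation
-- correction, and the HTTP status code.
summarise :: DetectLabelsResponse -> ([Label], Maybe OrientationCorrection, Int)
summarise rs =
    ( rs ^. dlrsLabels
    , rs ^. dlrsOrientationCorrection
    , rs ^. dlrsResponseStatus
    )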

data DetectLabelsResponse Source #

See: detectLabelsResponse smart constructor.

Instances

Eq DetectLabelsResponse Source # 
Data DetectLabelsResponse Source # 

Methods

gfoldl :: (forall d b. Data d => c (d -> b) -> d -> c b) -> (forall g. g -> c g) -> DetectLabelsResponse -> c DetectLabelsResponse #

gunfold :: (forall b r. Data b => c (b -> r) -> c r) -> (forall r. r -> c r) -> Constr -> c DetectLabelsResponse #

toConstr :: DetectLabelsResponse -> Constr #

dataTypeOf :: DetectLabelsResponse -> DataType #

dataCast1 :: Typeable (* -> *) t => (forall d. Data d => c (t d)) -> Maybe (c DetectLabelsResponse) #

dataCast2 :: Typeable (* -> * -> *) t => (forall d e. (Data d, Data e) => c (t d e)) -> Maybe (c DetectLabelsResponse) #

gmapT :: (forall b. Data b => b -> b) -> DetectLabelsResponse -> DetectLabelsResponse #

gmapQl :: (r -> r' -> r) -> r -> (forall d. Data d => d -> r') -> DetectLabelsResponse -> r #

gmapQr :: (r' -> r -> r) -> r -> (forall d. Data d => d -> r') -> DetectLabelsResponse -> r #

gmapQ :: (forall d. Data d => d -> u) -> DetectLabelsResponse -> [u] #

gmapQi :: Int -> (forall d. Data d => d -> u) -> DetectLabelsResponse -> u #

gmapM :: Monad m => (forall d. Data d => d -> m d) -> DetectLabelsResponse -> m DetectLabelsResponse #

gmapMp :: MonadPlus m => (forall d. Data d => d -> m d) -> DetectLabelsResponse -> m DetectLabelsResponse #

gmapMo :: MonadPlus m => (forall d. Data d => d -> m d) -> DetectLabelsResponse -> m DetectLabelsResponse #

Read DetectLabelsResponse Source # 
Show DetectLabelsResponse Source # 
Generic DetectLabelsResponse Source # 
NFData DetectLabelsResponse Source # 

Methods

rnf :: DetectLabelsResponse -> () #

type Rep DetectLabelsResponse Source # 
type Rep DetectLabelsResponse = D1 (MetaData "DetectLabelsResponse" "Network.AWS.Rekognition.DetectLabels" "amazonka-rekognition-1.4.5-JLr9ZFBNFwFGjCyEdV0gv" False) (C1 (MetaCons "DetectLabelsResponse'" PrefixI True) ((:*:) (S1 (MetaSel (Just Symbol "_dlrsLabels") NoSourceUnpackedness SourceStrict DecidedStrict) (Rec0 (Maybe [Label]))) ((:*:) (S1 (MetaSel (Just Symbol "_dlrsOrientationCorrection") NoSourceUnpackedness SourceStrict DecidedStrict) (Rec0 (Maybe OrientationCorrection))) (S1 (MetaSel (Just Symbol "_dlrsResponseStatus") NoSourceUnpackedness SourceStrict DecidedUnpack) (Rec0 Int)))))

Response Lenses

dlrsLabels :: Lens' DetectLabelsResponse [Label] Source #

An array of labels for the real-world objects detected.
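
To reproduce output in the {Name, Confidence} shape shown in the description, each Label can be inspected with lName and lConfidence; those two lens names come from Network.AWS.Rekognition.Types and are an assumption here, not something documented on this page:

{-# LANGUAGE OverloadedStrings #-}

import Control.Lens           ((^.))
import Data.Monoid            ((<>))
import qualified Data.Text    as T
import qualified Data.Text.IO as T
import Network.AWS.Rekognition

-- Print one line per detected label, e.g. "{Name: lighthouse, Confidence: 98.4629}".
-- lName and lConfidence are assumed Label lenses returning Maybe values.
printLabels :: DetectLabelsResponse -> IO ()
printLabels rs = mapM_ render (rs ^. dlrsLabels)
  where
    render l = T.putStrLn $
           "{Name: "         <> maybe "?" id (l ^. lName)
        <> ", Confidence: "  <> maybe "?" (T.pack . show) (l ^. lConfidence)
        <> "}"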

dlrsOrientationCorrection :: Lens' DetectLabelsResponse (Maybe OrientationCorrection) Source #

Amazon Rekognition returns the orientation of the input image that was detected (clockwise direction). If your application displays the image, you can use this value to correct the orientation. If Rekognition detects that the input image was rotated (for example, by 90 degrees), it first corrects the orientation before detecting the labels.

dlrsResponseStatus :: Lens' DetectLabelsResponse Int Source #

The response status code.