amazonka-polly-1.6.1: Amazon Polly SDK.

Copyright    (c) 2013-2018 Brendan Hay
License      Mozilla Public License, v. 2.0.
Maintainer   Brendan Hay <brendan.g.hay+amazonka@gmail.com>
Stability    auto-generated
Portability  non-portable (GHC extensions)
Safe Haskell None
Language     Haskell2010

Network.AWS.Polly.SynthesizeSpeech

Contents

Description

Synthesizes UTF-8 input, plain text or SSML, to a stream of bytes. SSML input must be valid, well-formed SSML. Some alphabets might not be available with all the voices (for example, Cyrillic might not be read at all by English voices) unless phoneme mapping is used. For more information, see How it Works.

Synopsis

Creating a Request

synthesizeSpeech Source #

Creates a value of SynthesizeSpeech with the minimum fields required to make a request.

Use one of the following lenses to modify other fields as desired:

  • ssSpeechMarkTypes - The type of speech marks returned for the input text.
  • ssSampleRate - The audio frequency specified in Hz. The valid values for mp3 and ogg_vorbis are "8000", "16000", and "22050". The default value is "22050". Valid values for pcm are "8000" and "16000". The default value is "16000".
  • ssTextType - Specifies whether the input text is plain text or SSML. The default value is plain text. For more information, see Using SSML.
  • ssLexiconNames - List of one or more pronunciation lexicon names you want the service to apply during synthesis. Lexicons are applied only if the language of the lexicon is the same as the language of the voice. For information about storing lexicons, see PutLexicon.
  • ssOutputFormat - The format in which the returned output will be encoded. For an audio stream, this will be mp3, ogg_vorbis, or pcm. For speech marks, this will be json.
  • ssText - Input text to synthesize. If you specify ssml as the TextType, follow the SSML format for the input text.
  • ssVoiceId - Voice ID to use for the synthesis. You can get a list of available voice IDs by calling the DescribeVoices operation.
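For example, assuming the smart constructor takes the three required fields in the order OutputFormat, Text, VoiceId (as in amazonka's generated API), a request might be built like this; the voice, text, sample rate, and lexicon name are illustrative:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((&), (.~), (?~))
import Network.AWS.Polly

-- Required fields go to the smart constructor; optional fields are
-- overridden with the lenses listed above.
req :: SynthesizeSpeech
req =
  synthesizeSpeech MP3 "Hello from Amazon Polly." Joanna
    & ssSampleRate ?~ "22050"         -- Maybe-valued field, so (?~)
    & ssLexiconNames .~ ["myLexicon"] -- hypothetical lexicon name
```

The `(?~)` operator wraps its argument in `Just`, which is convenient for the optional (`Maybe`-typed) fields.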

data SynthesizeSpeech Source #

See: synthesizeSpeech smart constructor.

Instances
Eq SynthesizeSpeech Source # 
Instance details

Defined in Network.AWS.Polly.SynthesizeSpeech

Data SynthesizeSpeech Source # 
Instance details

Defined in Network.AWS.Polly.SynthesizeSpeech

Methods

gfoldl :: (forall d b. Data d => c (d -> b) -> d -> c b) -> (forall g. g -> c g) -> SynthesizeSpeech -> c SynthesizeSpeech #

gunfold :: (forall b r. Data b => c (b -> r) -> c r) -> (forall r. r -> c r) -> Constr -> c SynthesizeSpeech #

toConstr :: SynthesizeSpeech -> Constr #

dataTypeOf :: SynthesizeSpeech -> DataType #

dataCast1 :: Typeable t => (forall d. Data d => c (t d)) -> Maybe (c SynthesizeSpeech) #

dataCast2 :: Typeable t => (forall d e. (Data d, Data e) => c (t d e)) -> Maybe (c SynthesizeSpeech) #

gmapT :: (forall b. Data b => b -> b) -> SynthesizeSpeech -> SynthesizeSpeech #

gmapQl :: (r -> r' -> r) -> r -> (forall d. Data d => d -> r') -> SynthesizeSpeech -> r #

gmapQr :: (r' -> r -> r) -> r -> (forall d. Data d => d -> r') -> SynthesizeSpeech -> r #

gmapQ :: (forall d. Data d => d -> u) -> SynthesizeSpeech -> [u] #

gmapQi :: Int -> (forall d. Data d => d -> u) -> SynthesizeSpeech -> u #

gmapM :: Monad m => (forall d. Data d => d -> m d) -> SynthesizeSpeech -> m SynthesizeSpeech #

gmapMp :: MonadPlus m => (forall d. Data d => d -> m d) -> SynthesizeSpeech -> m SynthesizeSpeech #

gmapMo :: MonadPlus m => (forall d. Data d => d -> m d) -> SynthesizeSpeech -> m SynthesizeSpeech #

Show SynthesizeSpeech Source # 
Instance details

Defined in Network.AWS.Polly.SynthesizeSpeech

Generic SynthesizeSpeech Source # 
Instance details

Defined in Network.AWS.Polly.SynthesizeSpeech

Associated Types

type Rep SynthesizeSpeech :: Type -> Type #

Hashable SynthesizeSpeech Source # 
Instance details

Defined in Network.AWS.Polly.SynthesizeSpeech

ToJSON SynthesizeSpeech Source # 
Instance details

Defined in Network.AWS.Polly.SynthesizeSpeech

AWSRequest SynthesizeSpeech Source # 
Instance details

Defined in Network.AWS.Polly.SynthesizeSpeech

Associated Types

type Rs SynthesizeSpeech :: Type #

ToHeaders SynthesizeSpeech Source # 
Instance details

Defined in Network.AWS.Polly.SynthesizeSpeech

ToPath SynthesizeSpeech Source # 
Instance details

Defined in Network.AWS.Polly.SynthesizeSpeech

ToQuery SynthesizeSpeech Source # 
Instance details

Defined in Network.AWS.Polly.SynthesizeSpeech

NFData SynthesizeSpeech Source # 
Instance details

Defined in Network.AWS.Polly.SynthesizeSpeech

Methods

rnf :: SynthesizeSpeech -> () #

type Rep SynthesizeSpeech Source # 
Instance details

Defined in Network.AWS.Polly.SynthesizeSpeech

type Rs SynthesizeSpeech Source # 
Instance details

Defined in Network.AWS.Polly.SynthesizeSpeech

Request Lenses

ssSpeechMarkTypes :: Lens' SynthesizeSpeech [SpeechMarkType] Source #

The type of speech marks returned for the input text.

ssSampleRate :: Lens' SynthesizeSpeech (Maybe Text) Source #

The audio frequency specified in Hz. The valid values for mp3 and ogg_vorbis are "8000", "16000", and "22050". The default value is "22050". Valid values for pcm are "8000" and "16000". The default value is "16000".

ssTextType :: Lens' SynthesizeSpeech (Maybe TextType) Source #

Specifies whether the input text is plain text or SSML. The default value is plain text. For more information, see Using SSML.

ssLexiconNames :: Lens' SynthesizeSpeech [Text] Source #

List of one or more pronunciation lexicon names you want the service to apply during synthesis. Lexicons are applied only if the language of the lexicon is the same as the language of the voice. For information about storing lexicons, see PutLexicon.

ssOutputFormat :: Lens' SynthesizeSpeech OutputFormat Source #

The format in which the returned output will be encoded. For an audio stream, this will be mp3, ogg_vorbis, or pcm. For speech marks, this will be json.

ssText :: Lens' SynthesizeSpeech Text Source #

Input text to synthesize. If you specify ssml as the TextType, follow the SSML format for the input text.

ssVoiceId :: Lens' SynthesizeSpeech VoiceId Source #

Voice ID to use for the synthesis. You can get a list of available voice IDs by calling the DescribeVoices operation.

Destructuring the Response

synthesizeSpeechResponse Source #

Creates a value of SynthesizeSpeechResponse with the minimum fields required to make a request.

Use one of the following lenses to modify other fields as desired:

  • ssrsRequestCharacters - Number of characters synthesized.
  • ssrsContentType - Specifies the content type of the audio stream. This should reflect the OutputFormat parameter in your request. If you request mp3 as the OutputFormat, the ContentType returned is audio/mpeg. If you request ogg_vorbis, it is audio/ogg. If you request pcm, it is audio/pcm in a signed 16-bit, 1-channel (mono), little-endian format. If you request json, it is audio/json.
  • ssrsResponseStatus - The response status code.
  • ssrsAudioStream - Stream containing the synthesized speech.
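Putting the request and response together, the operation can be sketched with amazonka's runtime (newEnv, runAWS, send, sinkBody); the region, voice, text, and output file name are assumptions, and credential setup and error handling are elided:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((^.))
import Control.Monad.IO.Class (liftIO)
import qualified Data.Conduit.Binary as CB
import Network.AWS
import Network.AWS.Polly

main :: IO ()
main = do
  env <- newEnv Discover  -- discover credentials from the environment
  runResourceT . runAWS env . within NorthVirginia $ do
    rs <- send (synthesizeSpeech MP3 "Hello, world." Joanna)
    -- Inspect response metadata with the lenses below.
    liftIO . print $ rs ^. ssrsRequestCharacters
    -- Stream the audio to disk without buffering it all in memory.
    sinkBody (rs ^. ssrsAudioStream) (CB.sinkFile "hello.mp3")
```

The audio stream must be consumed inside the ResourceT scope, before the underlying connection is released.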

Response Lenses

ssrsRequestCharacters :: Lens' SynthesizeSpeechResponse (Maybe Int) Source #

Number of characters synthesized.

ssrsContentType :: Lens' SynthesizeSpeechResponse (Maybe Text) Source #

Specifies the content type of the audio stream. This should reflect the OutputFormat parameter in your request:

  • If you request mp3 as the OutputFormat, the ContentType returned is audio/mpeg.
  • If you request ogg_vorbis as the OutputFormat, the ContentType returned is audio/ogg.
  • If you request pcm as the OutputFormat, the ContentType returned is audio/pcm in a signed 16-bit, 1-channel (mono), little-endian format.
  • If you request json as the OutputFormat, the ContentType returned is audio/json.

ssrsAudioStream :: Lens' SynthesizeSpeechResponse RsBody Source #

Stream containing the synthesized speech.
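Because RsBody is a streaming response body rather than a plain ByteString, it has to be drained with a conduit sink. A minimal sketch, assuming sinkBody is in scope from Network.AWS and conduit-extra's sinkLbs is available:

```haskell
import Control.Lens ((^.))
import qualified Data.ByteString.Lazy as BL
import qualified Data.Conduit.Binary as CB
import Network.AWS (AWS, sinkBody)
import Network.AWS.Polly

-- Collect the entire stream into a lazy ByteString. Fine for short
-- clips; prefer sinking to a file for long synthesis results.
audioBytes :: SynthesizeSpeechResponse -> AWS BL.ByteString
audioBytes rs = sinkBody (rs ^. ssrsAudioStream) CB.sinkLbs
```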