Given equal-length vectors of paths to images (preferably .jpg or .png files) and strings to be synthesized by Amazon Polly or any other synthesizer available in tts, this function creates an .mp4 video file in which each image is shown while its corresponding narration is spoken. The video itself is assembled by ari_stitch.

ari_spin(
  images,
  paragraphs,
  output = tempfile(fileext = ".mp4"),
  voice = text2speech::tts_default_voice(service = service),
  service = ifelse(have_polly(), "amazon", "google"),
  subtitles = FALSE,
  duration = NULL,
  tts_args = NULL,
  ...
)

have_polly()
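
have_polly() reports whether Amazon Polly appears to be available to the current session, and it drives the default for the service argument of ari_spin(). A minimal sketch of how it could be used; the fallback logic simply mirrors the documented default:

library(ari)

# Choose Amazon Polly when it is available, otherwise fall back to Google,
# mirroring the default of the `service` argument.
synth_service <- if (have_polly()) "amazon" else "google"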

Arguments

images

A vector of paths to images.

paragraphs

A vector of strings that will be spoken by Amazon Polly or the selected speech synthesis service.

output

A path to the video file which will be created.

voice

The voice you want to use. See tts_voices for more information about what voices are available.

service

Speech synthesis service to use, passed to tts. Either "amazon", "microsoft", or "google".

subtitles

Should a .srt file be created with subtitles? The default value is FALSE. If TRUE, a file with the same name as the output argument will be created, but with the file extension .srt (the second example in the Examples section sketches this).

duration

A vector of numeric durations for each audio track. See pad_wav.

tts_args

A list of arguments to pass to tts.

...

Additional arguments passed to ari_stitch.

Value

The output from ari_stitch.

Details

When the "amazon" service is used, this function needs to connect to Amazon Web Services in order to create the narration. A guide for accessing AWS from R is available online; for more information about how R connects to Amazon Polly, see the documentation for the aws.polly package.
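
One common way to supply AWS credentials from R before calling ari_spin() with the "amazon" service is through the standard AWS environment variables. A minimal sketch, with placeholder values that must be replaced by your own keys:

# Placeholder credentials: substitute your own AWS access key, secret, and region.
Sys.setenv(
  AWS_ACCESS_KEY_ID = "YOUR_ACCESS_KEY_ID",
  AWS_SECRET_ACCESS_KEY = "YOUR_SECRET_ACCESS_KEY",
  AWS_DEFAULT_REGION = "us-east-1"
)
have_polly()  # should now return TRUE if the credentials are valid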

Examples

if (FALSE) {
  slides <- system.file("test", c("mab2.png", "mab1.png"), package = "ari")
  sentences <- c("Welcome to my very interesting lecture.",
                 "Here are some fantastic equations I came up with.")
  ari_spin(slides, sentences, voice = "Joey")
}
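
A variant of the same example that uses the Google service and writes subtitles; the output path is illustrative, and the voice is left at its service default.

if (FALSE) {
  slides <- system.file("test", c("mab2.png", "mab1.png"), package = "ari")
  sentences <- c("Welcome to my very interesting lecture.",
                 "Here are some fantastic equations I came up with.")
  # Creates lecture.mp4 and, because subtitles = TRUE, lecture.srt alongside it.
  ari_spin(slides, sentences,
           output = "lecture.mp4",
           service = "google",
           subtitles = TRUE)
}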