Given equal-length vectors of paths to images (preferably `.jpg`s or `.png`s) and strings that will be synthesized by Amazon Polly or any other synthesizer available in `tts()`, this function creates an `.mp4` video file where each image is shown with its corresponding narration. This function uses `ari_stitch()` to create the video.
```r
ari_spin(
  images,
  paragraphs,
  output = tempfile(fileext = ".mp4"),
  voice = text2speech::tts_default_voice(service = service),
  service = ifelse(have_polly(), "amazon", "google"),
  subtitles = FALSE,
  duration = NULL,
  tts_args = NULL,
  ...
)

have_polly()
```
images | A vector of paths to images. |
---|---|
paragraphs | A vector of strings that will be spoken by Amazon Polly. |
output | A path to the video file which will be created. |
voice | The voice you want to use for the narration. |
service | Speech synthesis service to use, passed to `text2speech::tts()`. |
subtitles | Should a subtitles (`.srt`) file be created? |
duration | A vector of numeric durations for each audio track. |
tts_args | List of arguments to pass to `text2speech::tts()`. |
... | Additional arguments to `ari_stitch()`. |
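As a sketch of how these arguments combine (the file names, script text, and the `"Joanna"` voice below are assumptions for illustration, not values from this documentation):

```r
library(ari)

slides <- c("intro.png", "methods.png")          # paths to slide images
script <- c("Welcome to the talk.",              # one paragraph per slide
            "Here is how the method works.")

ari_spin(
  images     = slides,
  paragraphs = script,
  output     = "lecture.mp4",   # resulting video file
  service    = "amazon",        # use Amazon Polly for synthesis
  voice      = "Joanna",        # an assumed Polly voice name
  subtitles  = TRUE             # also write a subtitles file
)
```

Because `images` and `paragraphs` are matched element by element, they must have the same length; each slide is shown for the duration of its spoken paragraph.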
The output from `ari_stitch()`.
This function needs to connect to Amazon Web Services in order to create the narration. You can find a guide for accessing AWS from R here. For more information about how R connects to Amazon Polly, see the aws.polly documentation here.
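One common way to supply AWS credentials to R sessions is through environment variables, which packages such as aws.polly read automatically. This is a configuration sketch with placeholder values, not real keys:

```r
# Placeholder credentials -- substitute your own AWS keys before use.
Sys.setenv(
  "AWS_ACCESS_KEY_ID"     = "YOUR_ACCESS_KEY_ID",
  "AWS_SECRET_ACCESS_KEY" = "YOUR_SECRET_ACCESS_KEY",
  "AWS_DEFAULT_REGION"    = "us-east-1"
)
```

Set these before calling `ari_spin()` with `service = "amazon"`; `have_polly()` can then be used to check whether Polly is reachable.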
```r
if (FALSE) {
  slides <- system.file("test", c("mab2.png", "mab1.png"), package = "ari")
  sentences <- c("Welcome to my very interesting lecture.",
                 "Here are some fantastic equations I came up with.")
  ari_spin(slides, sentences, voice = "Joey")
}
```