Ari is an R package designed to help you make videos from plain text files. Ari uses Amazon Polly to convert your text into speech. You can then supply images or a set of HTML slides, which Ari will narrate from your script. Ari uses FFmpeg to stitch together the audio and images.
You can also install the development version of Ari from GitHub with:
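For example, using the `remotes` package (a sketch; the `jhudsl/ari` repository name is an assumption, so substitute the repository or fork you actually want to install from):

```r
# install.packages("remotes")  # if you don't already have it
# The repository name below is an assumption; adjust it to the repo you want.
remotes::install_github("jhudsl/ari")
```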
You also need to make sure you have FFmpeg version 3.2.4 or higher installed on your system.
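If you are not sure which version you have, a quick check from R (assuming `ffmpeg` is on your `PATH`) looks like this:

```r
# Locate the ffmpeg binary and print its version (assumes ffmpeg is on your PATH)
Sys.which("ffmpeg")
system("ffmpeg -version")
```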
First create an HTML slide presentation from R Markdown using a package like `rmarkdown` or `xaringan`. To see an example presentation, enter `browseURL(ari_example("ari_intro.html"))` into the R console. For every slide in the presentation you should write some text that will be read while the slide is being shown. You can do this in a separate Markdown file (see `file.show(ari_example("ari_intro_script.md"))`) or you can use HTML comments to put the narration right in your `.Rmd` file (see `file.show(ari_example("ari_comments.Rmd"))`). Make sure to knit your `.Rmd` file into the HTML slides you want turned into a video.
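For example, you can knit from the R console like this (a minimal sketch, assuming the `rmarkdown` package is installed; `"my_slides.Rmd"` is a placeholder for your own file):

```r
# Knit your slide deck (narration in HTML comments) into the HTML file
# that you will later pass to ari_narrate(). "my_slides.Rmd" is a placeholder.
rmarkdown::render("my_slides.Rmd")
```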
Once you have finished your script and slides, install the `aws.polly` package. You can find a guide for quickly setting up R to use Amazon Web Services here. Run `aws.polly::list_voices()` to make sure your keys are working (this function should return a data frame). Once you’ve set up your access keys you can start using Ari.
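A quick sanity check might look like the following; it should print a data frame of available Amazon Polly voices if your credentials are configured correctly:

```r
# Should return a data frame of Amazon Polly voices if your AWS keys work
voices <- aws.polly::list_voices()
head(voices)
```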
These examples make use of the `ari_example()` function. To view the files mentioned here, use `file.show(ari_example("[file name]"))`. You can watch an example of a video produced by Ari here.
```r
library(ari)

# First set up your AWS keys
Sys.setenv("AWS_ACCESS_KEY_ID" = "EA6TDV7ASDE9TL2WI6RJ",
           "AWS_SECRET_ACCESS_KEY" = "OSnwITbMzcAwvHfYDEmk10khb3g82j04Wj8Va4AA",
           "AWS_DEFAULT_REGION" = "us-east-2")

# Create a video from a Markdown file and slides
ari_narrate(
  ari_example("ari_intro_script.md"),
  ari_example("ari_intro.html"),
  voice = "Joey")

# Create a video from an R Markdown file with comments and slides
ari_narrate(
  ari_example("ari_comments.Rmd"),
  ari_example("ari_intro.html"),
  voice = "Kendra")

# Create a video from images and strings
ari_spin(
  ari_example(c("mab1.png", "mab2.png")),
  c("This is a graph.", "This is another graph"),
  voice = "Joanna")

# Create a video from images and Waves
library(tuneR)
ari_stitch(
  ari_example(c("mab1.png", "mab2.png")),
  list(noise(), noise()))
```
Some HTML slides take a while to render in `webshot` and can appear dark gray instead of white. Increasing the `delay` argument in `ari_narrate`, which is passed to `webshot`, can resolve some of these issues, though it will take a bit longer to run. Also, `capture_method = "vectorized"` is faster but may have some issues; if that happens, run with `capture_method = "iterative"`, as below:
```r
ari_narrate(
  ari_example("ari_comments.Rmd"),
  ari_example("ari_intro.html"),
  voice = "Kendra",
  delay = 0.5,
  capture_method = "iterative")
```
Creating videos from plain text has some significant advantages:
At the Johns Hopkins Data Science Lab we rapidly develop highly technical content about the latest libraries and technologies available to data scientists. Video production requires a significant time investment and APIs are always changing. If the interface to a software library changes it’s particularly arduous to re-record an entire lecture because some function arguments changed. By using Ari we hope to be able to rapidly create and update video content.