Ability for users to record and playback sound during a quiz

Sep 23, 2012

Hi there,

I have been thinking a lot about the implementation of the following two ideas; I hope I am not the only one who has these wishes. They would be game changers.

1. To be able to use Google's Voice Search engine in Storyline! A user would have to fill in the blank by speaking the word. Google's Voice Search is pretty good at recognizing words. I have found an Android application that used the search engine to display the spoken words as text (it was a speech-to-SMS program).

2. The ability for a user (learner) to listen first to a sound created in the course, and then the option to record his or her own voice so the user can compare the pronunciation. (I have the Cambridge Advanced Learner's Dictionary, which has this option.)

Can anyone tell me about the possibilities, or share some thoughts about this?

Online language learning often goes without practicing pronunciation... or when a site does cover it, it revolves completely around that one skill.



37 Replies
Katie Riggio

Hi there, Russell!

Storyline doesn't have a feature that allows learners to record their voice at this time, but I'll share your note with the right folks.

In the meantime, I've seen the community use other tools to achieve this. Check out these related discussions:

Ashraf Li

This code works very well in Storyline. The only issue is that you have to set a specific recording time in "await sleep(5000);" (measured in milliseconds) in the code below. You can put it directly inside a trigger using Execute JavaScript anywhere, and the recording will play back by itself after it finishes.

There are other ways to save the recording, but when I tried them about a year ago none of them worked. I hope they work in newer versions.

You can put it in a Command, a shape... or even in the player tab.

Students could use it to practice speaking.

Hope it can help.

Here's the code:

const recordAudio = () =>
  new Promise(async resolve => {
    // Ask the browser for microphone access (this prompts the learner).
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });

    const mediaRecorder = new MediaRecorder(stream);
    const audioChunks = [];

    // Collect the recorded data as it becomes available.
    mediaRecorder.addEventListener("dataavailable", event => {
      audioChunks.push(event.data);
    });

    const start = () => mediaRecorder.start();

    const stop = () =>
      new Promise(resolve => {
        mediaRecorder.addEventListener("stop", () => {
          // Assemble the recorded chunks into a playable audio blob.
          const audioBlob = new Blob(audioChunks);
          const audioUrl = URL.createObjectURL(audioBlob);
          const audio = new Audio(audioUrl);
          const play = () => audio.play();

          resolve({ audioBlob, audioUrl, play });
        });

        mediaRecorder.stop();
      });

    resolve({ start, stop });
  });

const sleep = time => new Promise(resolve => setTimeout(resolve, time));

(async () => {
  const recorder = await recordAudio();

  recorder.start();

  // Record for 5000 ms (5 seconds); change this value to adjust the recording time.
  await sleep(5000);

  const audio = await recorder.stop();

  // Play the recording back automatically once recording ends.
  audio.play();
})();
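Ashraf mentions that saving the recording did not work when he tried. For anyone experimenting further, one browser-side approach is to offer the recorded blob as a file download through a temporary anchor element. This is only a sketch I have not tested inside Storyline, and the `saveRecording` and `recordingFilename` names are my own, not from this thread:

```javascript
// Build a date-stamped filename for the download, e.g. "recording-2012-09-23.webm".
const recordingFilename = (date) =>
  "recording-" + date.toISOString().slice(0, 10) + ".webm";

// Offer an audio Blob as a file download by clicking a temporary <a> element.
// Browser-only: relies on URL.createObjectURL and the DOM being available.
const saveRecording = (audioBlob) => {
  const url = URL.createObjectURL(audioBlob);
  const a = document.createElement("a");
  a.href = url;
  a.download = recordingFilename(new Date());
  document.body.appendChild(a);
  a.click();
  document.body.removeChild(a);
  // Release the object URL once the download has been triggered.
  URL.revokeObjectURL(url);
};
```

Calling `saveRecording(audioBlob)` with the blob returned from the recorder's stop step would prompt the browser to download the clip; whether that is usable for a course depends on how it is deployed.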

Jose Tansengco

Hello Debraj,

We still don't have any new updates to share regarding this feature request, but we'll be sure to let everyone subscribed to this thread know when this feature gets added. Here's a quick look at how we manage feature requests.

In the meantime, you can check out my colleague Katie's response here, where she shared some community posts containing workarounds provided by members of the community.