Rutwin Geuverink

Hi all,

I forgot to mention that this will only work with Chrome and it's just a POC I did about a year ago.

It started when I noticed I had gotten into the habit of telling (Asian) ESL students to use Google's voice search in Chrome or on an Android phone to check their pronunciation of plural forms on their own (since many Asian learners have difficulty pronouncing the final "s").

Since Google's speech recognition was actually much better than I initially expected, I got the idea to look for a way to integrate this into a storyline project.

If anyone is interested, I can put together a simple how-to and post it here as well.

Cheers

Rutwin

Cheng Li

Rutwin Geuverink said:

Hi all,

I forgot to mention that this will only work with Chrome and it's just a POC I did about a year ago. I've attached the project file so you can take a look under the hood, and if anyone is interested I can put together a simple how-to and post it here as well.

Cheers

Rutwin



Thanks for sharing this great resource!!! You are awesome Rutwin!

Rutwin Geuverink

@Nick - Yep, that is indeed (part of) the way it's done. Luckily, this can be found quite easily now compared with a year ago.

You will need to add this code to the index.html (or .php) of your site's template. In my case, I just added the code to a blank template, which makes it easier to use in a web object.

See the blank template here: http://rutwin.com/storyline-voice-recognition/index.php

Just right-click and view the source to see the actual code I used.

As you can see, I added the form in the body and the script in the head of the template.

The issue with the original code from Google's developers was that "transcribe(this.value)" didn't populate localStorage until the user hit the Enter key.

By using "voiceInputOver(this.value)" instead, the transcribed voice input is automatically added to localStorage. (This takes care of the first of two additional, unwanted user inputs.)

  • To take away the last of these additional user inputs, I'd like to find a (non-intrusive) way for Storyline to check for a change in the value stored in localStorage. That way the user only needs to click once to complete the whole process of transcribing and evaluating the voice input.
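One hedged way to sketch that check (the key name "voiceInput" and variable name "VoiceInput" are illustrative, not taken from Rutwin's actual project): have the published Storyline page poll localStorage and push any new value into a Storyline variable through the player API, so the learner never has to click a second time.

```javascript
// Sketch of a non-intrusive localStorage watcher. makeWatcher builds a
// checker around any storage-like object; each call to the checker compares
// the stored value against the last one seen and fires onChange only when
// the value has changed.
function makeWatcher(storage, key) {
  var last = storage.getItem(key);
  return function check(onChange) {
    var current = storage.getItem(key);
    if (current !== last) {
      last = current;
      onChange(current);
      return true;
    }
    return false;
  };
}

// In the published Storyline page (browser only), the wiring might look like:
// var check = makeWatcher(window.localStorage, "voiceInput");
// setInterval(function () {
//   check(function (v) { parent.GetPlayer().SetVar("VoiceInput", v); });
// }, 500);
```

Polling every half second or so is crude but non-intrusive; the interval only writes to Storyline when the stored transcript actually changes.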

Cheers,

Rutwin

Steve Flowers

Here's another JS library for voice recognition:

https://www.talater.com/annyang/

Experimented a bit. It's cool, but browser support is really limited. The idea that you can use the browser to directly control actions is attractive. In my experiment, I set it up with "Show me the next screen" and "Show me the previous screen" for voice navigation. Neat. If *only* browser support were better :(
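A minimal sketch of that experiment, assuming annyang is loaded on the page and using the same GetPlayer/SetVar bridge discussed later in this thread (the variable name "NavCommand" is illustrative; a Storyline trigger would watch it and jump slides accordingly):

```javascript
// Voice navigation commands for annyang (https://www.talater.com/annyang/).
// Each phrase maps to a handler that notifies the Storyline parent.
var navCommands = {
  "show me the next screen": function () {
    parent.GetPlayer().SetVar("NavCommand", "next");
  },
  "show me the previous screen": function () {
    parent.GetPlayer().SetVar("NavCommand", "previous");
  }
};

// Guarded so the sketch is inert where annyang (or a browser) is missing.
if (typeof annyang !== "undefined" && annyang) {
  annyang.addCommands(navCommands);
  annyang.start();
}
```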

Rutwin Geuverink

Cool find Steve!

I also like the browser support vs global usage link http://caniuse.com/#feat=stream

Can you think of a way to set up "show me the next/previous screen" while using the Google API? (I mean in combination with SL?)

Edit: this JS library is great when you want to implement speech-controlled accessibility features; however, it's not suitable when you want to use the transcribed values in Storyline.

Rutwin Geuverink

@Nick,

Thanks for your comment; it feels good that someone appreciates the effort I put into this.

In this latest concept I didn't use any JavaScript in SL itself, because I only wanted to trigger a variable when the user finished speaking...

(Initially I wanted to time it, or put triggers around the web object and "guess" when a user should have finished speaking... but of course this wasn't really viable.)

I asked the following question in another thread: http://community.articulate.com/forums/t/40285.aspx

"I would like to have a JavaScript triggered at the same time the user clicks on a web object..... (The user needs to click on a button displayed by the web object)"

The following reply from Steve Flowers got me going (although very cryptic for a non-programmer like me), and at least gave me the assurance that what I wanted was possible.

Steve Flowers said:

Hi Rutwin - 

What do you want to have happen as a result of clicking on the button in the Web object? If you want something to happen in Storyline, you could use a relative reference to access the player API and update a variable in Storyline when the button in the Web object is accessed.

For example, on the button in your web object, trigger a JavaScript function something like this:

var player=parent.GetPlayer();

player.SetVar("StorylineVar","someValue");

Something like that would let Storyline know that the object within the Web object has been triggered.

I just started messing around with the original code I had made a year ago... and after many failures, cups of coffee, and hours, I eventually got it all working. (And since my JavaScript skills are next to nothing, I'm pretty proud that my persistence to make it work the way I wanted actually paid off.)

I hope you're happy that I'm sharing the complete, working index file of my site's template here (the one the web object links to).
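Rutwin's actual file isn't reproduced here, but based on his description (form in the body, script in the head, "voiceInputOver(this.value)" writing to localStorage), a minimal template might look something like the sketch below. All names are illustrative, and the x-webkit-speech input attribute only ever worked in older versions of Chrome and has since been removed from the browser:

```html
<!DOCTYPE html>
<html>
<head>
  <script>
    /* Illustrative sketch, not Rutwin's actual code: store the transcript
       where the Storyline parent can pick it up. */
    function voiceInputOver(value) {
      localStorage.setItem("voiceInput", value);
    }
  </script>
</head>
<body>
  <form>
    <!-- Legacy Chrome-only speech input; fires when dictation finishes. -->
    <input type="text" x-webkit-speech
           onwebkitspeechchange="voiceInputOver(this.value)">
  </form>
</body>
</html>
```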

Kind Regards,

Rutwin

Rutwin Geuverink

@Nick... believe me, I'm a non-programmer.

I just finished sending you a private message when I realized that maybe the only reason it's not working for you is that you are viewing the SL output locally, instead of on a web server!

The SL variable won't update when published/viewed locally!

Rutwin Geuverink

Great! I'm happy that you got it working and are as enthusiastic about the new possibilities as I am!

To round off this speech recognition topic, I'd like to share some information I came across that some might like (or just Nick : )

http://www.clt-net.com/icwe/corporatelanguagetraining/technology/intellispeech.htm

http://www.speexx.com/onlinedemo/english/

The people behind the speech technology used on these websites all worked at some point for Nuance (which supplied the speech recognition behind Apple's Siri, and seems to be putting real effort into becoming the leader in this billion-dollar industry).

and then this......

At the beginning of this thread I wrote about how my quest for voice integration started......

Rutwin Geuverink said:

 

It started when I noticed that I got into a habit of telling (Asian) ESL students to make use of Google's voice search in Chrome/Android-phone to autonomously check their pronunciation of words


...well... I'm just a teacher with no millions to invest... unlike the former VP of Nuance, who had the same idea and started this incredible website:

http://blog.englishcentral.com/2011/04/01/the-origins-of-englishcentral-alan-scwartz-founder-ceo/

Marvelous!! "It actually works" :) - (mixed feelings...I feel like a small ant....)

Danny Simms

Hi Rutwin and All,

I have been following this thread and I concur with all the previous comments; what you have achieved is brilliant. I have recently upgraded to Studio '13 rather than Storyline and was wondering if the voice recognition will work in the same fashion.

My coding is very limited but I can certainly follow instructions. Once again, brilliant work.

Kind regards

Danny

Rutwin Geuverink

Thanks for your kind words, Danny!

With regard to your question, it is unfortunately not possible to integrate voice recognition in Studio '13.

The transcribed voice data has to be stored in a variable - a feature only included in Storyline.

For a complete comparison between Studio and Storyline click here  

Maria Jonsson

Hi

I have followed this thread with interest. I am about to build a language course in Storyline where I would like to use speech recognition so the user can check his/her pronunciation. BUT it must be possible to run it in Internet Explorer, since that is my customer's only choice of web browser. Has anyone found a way to solve that?

Best regards

Maria

Alex O'Byrne

I was thinking about this thread earlier and wondering if anyone could think of a way to use this technique, or a similar one, for a slightly different application. I was thinking about having an intro video where the person has already input their name (or it gets pulled from the LMS); the name then gets sent to a text-to-speech website and played between two voice clips. So what I was thinking:

step 1: person inputs name or it gets pulled through from the LMS (slide 1)

step 2: pre recorded "hi there" plays

step 3: TTS website plays the person's name (macro/JavaScript the info across?)

step 4: the rest of the pre-recorded introduction plays

So from the end user's point of view, all that happens is an intro video/audio plays and says their name (in a different voice, but hey, can't be 100% right).

I am not tech-savvy enough to do this, but it looks like the type of thing the people involved in this thread could get done?

Food for thought anyway!
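The steps above could also be sketched without an external TTS website, using the browser's built-in speech synthesis (part of the Web Speech API, subject to the same browser-support caveats discussed earlier). File names and the function names here are purely illustrative, not from anyone's actual project:

```javascript
// Build the three-part greeting sequence from Alex's steps: a pre-recorded
// clip, the learner's name spoken via TTS, then the rest of the intro.
function buildGreeting(name) {
  return [
    { type: "audio", src: "hi-there.mp3" },   // step 2: pre-recorded "hi there"
    { type: "tts", text: name },              // step 3: speak the name
    { type: "audio", src: "intro-rest.mp3" }  // step 4: rest of the intro
  ];
}

// Browser-only playback; guarded so the sketch is inert elsewhere. A real
// implementation would chain all three parts with "ended"/"onend" events;
// here only the TTS step is shown.
function playGreeting(name) {
  if (typeof window === "undefined" || !window.speechSynthesis) return;
  var parts = buildGreeting(name);
  window.speechSynthesis.speak(new SpeechSynthesisUtterance(parts[1].text));
}
```

The synthesized voice will differ from the recorded clips, as Alex anticipates, but the sequencing itself is straightforward.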