Storyline Profanity Detection

Oct 29, 2018

Just something I was working on that I thought I'd share in case it's useful to anyone.

Background

A project I worked on a while ago involved creating eLearning material that included a digital assignment. The interaction allowed the learner to enter their answers to a selection of questions in the browser. My interaction then compiled their answers into an assignment, which was downloaded as an MS Word doc, saved to a Google Sheet, and emailed to their tutor for marking.

After several months of using the eLearning successfully, one of the tutors came to me and suggested an improvement that intrigued me. He explained that a common problem he encountered was learners using bad/rude language when completing their assignments, which meant the assignments needed to be completed again, leading to wasted time. He asked me to look into a way to politely let the learner know that the language they were using was inappropriate before the assignment was compiled and submitted, giving them the opportunity to change it there and then.

Solution

The solution seemed straightforward to me. I needed to take the variable storing the learner's answer out of Storyline, process it using JavaScript to check for profanity, then push a result back to Storyline, triggering either the ability to progress to the next question or a message box cautioning the learner about their use of bad language.

JavaScript

This is what I came up with:

var player = GetPlayer();
var str = player.GetVar("ac1_1"); // learner's answer in var

// Get bad words list from webserver
function textFileToArray(filename) {
    var reader = (window.XMLHttpRequest != null)
        ? new XMLHttpRequest()
        : new ActiveXObject("Microsoft.XMLHTTP");
    reader.open("GET", filename, false);
    reader.send();
    return reader.responseText.split("/");
}

// text file name
var terms = textFileToArray("badwords.txt"); // load search terms from txt
var profanitypresent = "no"; // default filter to allow progress

terms.forEach(function (term) { // for each search term...
    var result = str.toLowerCase().includes(term); // is search term present in string
    if (result == true) {
        profanitypresent = "yes"; // if search term is present then set profanitypresent to yes
    }
});

if (profanitypresent == "yes") {
    var d2 = new Date();
    var n2 = d2.getTime();
    player.SetVar("rudemessage", n2); // if profanity found then set rudemessage to current time to trigger warning pop-up
} else {
    var d = new Date();
    var n = d.getTime();
    player.SetVar("rudecheck", n); // if no profanity then set rudecheck to current time to allow progression
}

My comments in the code go some way to explaining what is going on. Essentially, I'm pulling a list of "bad words" from a txt file into an array. Then, for each word in that array, I search the learner's answer. If the word is found in their answer, I set my variable, profanitypresent, to "yes".
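
The post doesn't show the contents of badwords.txt, but since the script splits the response text on "/", the file is presumably a single slash-separated list of terms, something along these lines (using the two words the demo accepts):

damn/poop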

Once I've finished searching the learner's answer for "bad words", I set a variable in Storyline to the current date and time: either rudemessage, to let Storyline know that bad words were found and a warning needs to be displayed, or rudecheck, to let Storyline know that the check has completed and no bad words were found, allowing progression within the course assignment.

Storyline

Within Storyline I needed to create variables to be set using the JavaScript. On the master slide I also assigned some triggers.

  • My next button runs the script.
  • I set a trigger so that when the variable rudemessage changes the warning layer is displayed showing the learner my warning message (and a GIF of Nathan Fillion giving a disapproving look - just 'cause).
  • I set a trigger so that when the variable rudecheck changes the player advances to the next slide.

Because my decision to advance the slide (or not) is based on variables changing, I needed to ensure that each variable would always be set to a different value, hence using a date/time stamp, which will always be unique.
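
Incidentally, the two Date blocks in the script could each be collapsed to a single line, since Date.now() returns the same millisecond timestamp as new Date().getTime():

player.SetVar("rudecheck", Date.now()); // a new value on every run, so the Storyline trigger always fires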

Concerns

I am concerned about the potential to censor free expression; however, I understand that foul language isn't useful and wastes time in the completion of these assessments and the associated qualifications.

I have left it up to the trainers to populate the "bad words" text file (achieved through a combination of a Microsoft Form and a Microsoft Flow). However, I am concerned that some words could be unduly identified as foul language, for example, if a learner were to write about a male chicken, a cock. In my particular case the course in question was designed for construction workers, so that is unlikely!
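
One possible refinement, not part of the script above, would be to match whole words only rather than substrings, so that a listed term such as "ass" doesn't flag innocent words like "class" or "assessment". It wouldn't help where the word itself has an innocent meaning (the chicken case), but it narrows the false positives. A rough sketch of that check, using a hypothetical containsBadWord() helper:

// match whole words only, case-insensitively
function containsBadWord(answer, terms) {
    return terms.some(function (term) {
        // escape regex metacharacters in the term, then wrap it in word boundaries
        var escaped = term.trim().replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
        if (escaped === "") { return false; }
        var pattern = new RegExp("\\b" + escaped + "\\b", "i");
        return pattern.test(answer);
    });
}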

Future Developments

Looking to grow this for future projects, I can see this being used for the opposite purpose: searching a learner's response for words that should be included. For example, it could be used to provide some instant feedback to a learner based on how many of the key points they have covered in their answer, before the assignment is officially marked by a tutor.
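
A rough sketch of what that reversed check might look like, assuming the textFileToArray() function from the main script is available in the same trigger; keywords.txt and the Storyline variable keypointsfound are hypothetical names for illustration:

var player = GetPlayer();
var answer = (player.GetVar("ac1_1") || "").toLowerCase();

// load the required terms; keywords.txt would use the same slash-separated format as badwords.txt
var keyTerms = textFileToArray("keywords.txt");

// count how many of the key points appear somewhere in the answer
var found = keyTerms.filter(function (term) {
    return term !== "" && answer.includes(term.toLowerCase());
}).length;

// a Storyline number variable that a feedback layer or text box could reference
player.SetVar("keypointsfound", found);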

Downloads & Links

Click here to test the demo (test with one of the following words: damn or poop)

Click here for the Storyline download
