Instructional Design
Drawing Annotation on Storyline Slide
Demo: https://360.articulate.com/review/content/518383b2-1161-408d-b9f5-adb9a6f57a11/review
Inspired by code discussed at https://img.ly/blog/how-to-draw-on-an-image-with-javascript/ and "Using Charts in your Storyline | Articulate - Community".

About
This is a brief example of using an HTML canvas element to enable annotation on a Storyline slide. The example displays an image on a slide, sets a few variables, and sets the accessibility tags of some slide objects. It then runs a JavaScript trigger that creates a canvas over the image and watches the mouse buttons and movement to allow drawing on the canvas.

How it works
A canvas element is created, filled with the specified base image, and inserted below a small rectangle (canvasAnchor) that is the same width as the image and placed directly above it. Another rectangle (canvasClickArea) overlays the image and is sized to match it. This is the area that allows drawing on the canvas (where the mouse is watched). Brush width and color can be controlled, the drawing can be cleared, and the canvas resizes with the slide.

To improve
The events that watch the mouse and the clear button should be better handled to allow removal when a new canvas is created.
A mechanism to allow a blank base (clear) should be reinstated. Right now it just relies on the initial image from the slide; both options have their uses.
Since the canvas is a raster image, resizing to very small and then very large results in poor image quality.
The image can be extracted from the canvas, so it could be saved or printed.
More drawing options are available with the canvas element.

Credit: X-ray images from https://learningradiology.com/
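The mouse-watching approach described under "How it works" can be sketched in a few lines. This is a minimal standalone sketch, not the project's actual code; the canvas id, brush settings, and helper name are illustrative:

```javascript
// Pure helper: convert a mouse event's client coordinates to canvas-relative ones.
function toCanvasCoords(rect, clientX, clientY) {
  return { x: clientX - rect.left, y: clientY - rect.top };
}

// Guarded so the helper above can also run outside a browser.
if (typeof document !== 'undefined') {
  const canvas = document.getElementById('annotationCanvas'); // illustrative id
  const ctx = canvas.getContext('2d');
  ctx.lineWidth = 4;        // brush width
  ctx.strokeStyle = 'red';  // brush color
  ctx.lineCap = 'round';
  let drawing = false;

  // Start a path on mousedown, extend it while the button is held.
  canvas.addEventListener('mousedown', (e) => {
    drawing = true;
    const p = toCanvasCoords(canvas.getBoundingClientRect(), e.clientX, e.clientY);
    ctx.beginPath();
    ctx.moveTo(p.x, p.y);
  });
  canvas.addEventListener('mousemove', (e) => {
    if (!drawing) return;
    const p = toCanvasCoords(canvas.getBoundingClientRect(), e.clientX, e.clientY);
    ctx.lineTo(p.x, p.y);
    ctx.stroke();
  });
  // Stop drawing when the button is released or the pointer leaves the area.
  ['mouseup', 'mouseleave'].forEach((ev) =>
    canvas.addEventListener(ev, () => { drawing = false; })
  );
}
```

In the actual project the listeners would be attached to the canvasClickArea rectangle rather than the canvas itself.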
Seeking Experienced Instructional Designer/SME for Education Projects (Contract Work)
Over a decade ago, I connected with two amazing contractors on this platform, and now I'm seeking another professional to join my team on a contract basis. My main instructional designer is transitioning to a full-time role, and I need someone who can collaborate with me on creating scripts and content for a variety of education-specific subjects.

This is contract work, and projects will be assigned as needed. Ideal candidates will have a background in instructional design, research, or subject matter expertise in education. You’ll work closely with me to ensure the content is engaging, accurate, and meets the high standards required for educational development.

If you are interested, please reach out with your portfolio or samples of previous work. I’m looking forward to hearing from you!
Erika

Designing eLearning Automation with Agentic AI - A Demo Using OpenAI Realtime + Storyline360
Hello Heroes! I’m thrilled to share my latest YouTube video that begins to scratch the surface of what agentic AI controls can deliver to the Articulate Storyline user experience! 🚀 In this video, I demonstrate how generative AI, powered by OpenAI’s Realtime API, can be used to actively shape, guide, and adapt eLearning experiences in real time.

🌟 What You’ll See 🌟
Agentic AI in Action: Discover how AI can guide learners, answer their questions, and foster curiosity during eLearning modules.
Function Call Demonstration: Watch AI dynamically trigger actions in Storyline 360, like changing slides based on user input, to enhance interactivity.
Real-Time Audio Streaming: Explore how audio data is processed and streamed almost instantaneously, creating seamless learner-AI interactions.

🌟 Why It Matters 🌟
Customised Learning Sessions: See how session-based instructions allow AI to personalise responses, aligning them with specific learning goals.
Data-Driven Insights: Learn how AI-driven function calls can unlock rich learning analytics, making adaptive learning a reality.
A Vision for the Future: Imagine self-healing courses that evolve based on learner behaviour and feedback.

📢 What’s Next?
Integrating real-time audio transcription for dynamic closed captions.
Expanding session data capabilities to unlock even greater personalisation and interactivity.

Generative AI is reshaping how we design and deliver eLearning, and this is just the beginning. Join me on this journey by watching the video here: YouTube Link. Don’t forget to subscribe to the Discover eLearning channel for more insights and demos to elevate your training projects! Let me know your thoughts or questions below—I’d love to hear how you think agentic AI development will have an impact on learning experience design! 😊

---
My name's Chris Hodgson, an eLearning developer and software trainer based in the UK. I enjoy creating fun, unique, and engaging online experiences using Articulate software!
Connect with me on LinkedIn - https://www.linkedin.com/in/chrishodgson44/

Sharing a Three.js Example – Fireworks
Someone recently asked a question about manipulating the 360-image player in Storyline. That got me looking into Three.js, which Storyline uses to power that player. I’ve started looking into what other effects could be easily integrated into Storyline projects using Three. It’s all very new to me, but I do like a challenge.

I chose a visually appealing example as a test case for integration into a Storyline project. It’s a simple fireworks demo created by Michael Schlachter (https://github.com/manthrax/atos/tree/master) but could have some potential in e-learning projects as a success visual. This would be akin to the many confetti-related posts in the Articulate forums over the years. It's a basic demo, but I thought I’d share the results with anyone who might want to use it. I’ve attached a sample project that demonstrates its use.

Demo: https://360.articulate.com/review/content/74a1f4c7-467a-4a44-8136-27fdf249ab15/review

This application uses a script on the master to display some fireworks graphics. It loads a customized Three.js library with a few addons included. There are also some sound effect samples. The Three package and the sound clips are loaded via the web object in Scene 2. They’re included in the attached zip file.

I made a few modifications to Michael’s original demo to make it work within the Storyline slides. I disabled the rotational display functionality, but you could always add it back in. I left the ability to zoom in and out with the mouse wheel. The UnrealBloom postprocessing effects in Three.js give the atmospheric glow to the fireworks. It’s a known issue that they do not directly support canvas transparency. There are some workarounds for various versions of Three, but getting them to work with a very limited understanding of the library is not very straightforward. I did manage to get it working in this demo, so you are able to overlay the fireworks onto other slide objects, which makes them much more useful.
My implementation inserts a canvas element into either the main slide background or over the background of a shape object, like a rectangle. You can specify where it should attach using the targetTag variable, as shown in the demo.

This is a quick demo, so there are limited comments and bits of sloppy code. There are plenty of areas for expansion and improvement. It includes the core Three library (v170) and only the addons listed in the source files on the Git page. You should be able to use the demo as a template and adapt it for your own uses.

Update: I still can't seem to attach zip files reliably, so here is the link to the web object folder. Link: https://paedagogus.org/ELH/fw/WO.zip
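For anyone curious how a canvas can be attached to a tagged slide object, here is a minimal sketch of the general idea. This is my own illustration, not the demo's source: the data-acc-text selector is an assumption about how accessibility tags appear in published Storyline output, and the tag value is made up:

```javascript
// Pure helper: pick the index of the element whose accessibility label matches
// targetTag, falling back to the first entry (e.g. the slide background).
function pickTarget(labels, targetTag) {
  const i = labels.indexOf(targetTag);
  return i >= 0 ? i : 0;
}

// Guarded so the helper above can also run outside a browser.
if (typeof document !== 'undefined') {
  const targetTag = 'fireworksHost'; // illustrative value of the targetTag variable
  const nodes = Array.from(document.querySelectorAll('[data-acc-text]'));
  const labels = nodes.map((n) => n.getAttribute('data-acc-text'));
  const host = nodes[pickTarget(labels, targetTag)] || document.body;

  // Overlay a canvas on the chosen object without blocking slide clicks.
  const canvas = document.createElement('canvas');
  canvas.style.position = 'absolute';
  canvas.style.inset = '0';
  canvas.style.pointerEvents = 'none';
  host.appendChild(canvas);
  // A Three.js WebGLRenderer would then be created with { canvas, alpha: true }
  // so the fireworks can be composited over other slide objects.
}
```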
Looking for a Solution Developer
Hi Heroes! We're on the lookout for a talented Solution Developer with 1-4 years of experience as a Learning Designer and Articulate 360 developer. If you're skilled in creating impactful e-learning experiences and have a strong foundation in Articulate 360, we’d love to hear from you! If you're interested in this opportunity, please leave a comment below with your details. Looking forward to connecting with passionate e-learning professionals! Thank you!

Scoring User Drawn Images in Storyline
Huh, my whole previous post just vanished. Trying again...

This is a follow-up to a previous discussion post on drawing annotations on a Storyline slide. In that post, I demonstrated an approach allowing the user to draw on an image to indicate some kind of response. It utilized a canvas element and intercepted mouse clicks and movements to draw paths. The next step, as pointed out by Math Notermans, was to score the user’s input in some way.

There are several JavaScript libraries available that perform image comparisons, usually returning some kind of quantified pixel difference as a result. Resemble.js is one such option. It returns differences as a percentage of pixels compared to the entire image size. The question is then, how to turn this into a usable score?

Demo: https://360.articulate.com/review/content/d96de9cf-2fd1-45a5-a41a-4a35bf5a1735/review

In this example, I made a few improvements to the annotation script that was posted previously. Most notably, I added a simple undo option that records and recreates user drawings. This also allows the user’s drawing to maintain its sharpness after resize events, instead of the previous approach of straight scaling. I also changed it to allow drawing on a touch screen (limited testing). I included a loader for Resemble.js and some code connected to the Check button to evaluate what the user has drawn.

While this example is really just meant to demonstrate the process and help you visualize the results, the idea could easily be applied to some kind of complex user interaction that is not better served by more traditional point-and-click or drag-and-drop selections. As this demo shows, it could be particularly well suited for having users determine the proper pathway for something in a more free-response fashion, as opposed to just selecting things from a list or dropping shapes. After drawing a response to the prompt, clicking Check will generate a score.
The score is based on the comparison of the user’s response to predetermined keys, which are images that you include when building the interaction. I used two keys here, one for the ‘correct’ answer and one for a ‘close’ answer. You can set it up to include more key options if you need more complexity.

Since all we get from Resemble is a difference score, we need to convert that into a similarity score. To do that, I followed these steps:
1. Copy the key images to individual canvases.
2. Create a blank canvas for comparisons.
3. Convert these and the user drawing canvas to blobs to send to Resemble.
4. Compare the user drawing to the blank (transparent) canvas to get its base difference.
5. Compare each of the keys in the same way to get their base difference scores. These, along with the visualized differences, are shown on the top three inset images.
6. Then, compare each key with the user drawing to get the compared differences. The comparison order needs to be consistent here. These are shown on the lower two inset images.
7. Calculate the similarity scores (this will be slightly different between scenarios, so you need to customize it to create the score ranges you expect).

The similarity is essentially a score that ranges from 0 to 1, with 1 being the most similar. When creating your keys, you need to note what brush sizes and colors you are using. Those should be specified to the user, or preset, for best results. Resemble has some comparison options, but you want to make the user’s expected response as similar to the key as you can.

For the ‘Correct’ answer: The similarity is just 1 - (compared difference) / (user base difference + key base difference). To properly range this from 0 to 1, we also make some adjustments. We cap the (user + key) sum at 100% and then set the similarity floor to 0. We also divide this result by an adjustment factor. This factor is essentially the best uncorrected score you could achieve by drawing the result on the slide.
Here, I could not really get much over 85%, so we normalize this to become 100%.

Next, we make an adjustment that weighs the total area of the ‘Correct’ key against the total area drawn by the user. If the user draws a lot more or less than the correct answer actually contains, we do not want the result to be unduly affected. This eliminates much of the influence caused by scribbling rough answers across the general correct location. Before, scribbling could often increase the final score; this fixed that. The adjustment is to multiply the current similarity score by: (the lesser of the user or key drawing base differences) / (the square of the greater of the base differences). We use the square in the denominator to ensure that drawing too much or too little will rapidly decrease the overall similarity score. We again cap this final adjusted similarity score at 1, ensuring a working range of 0 to 1.

For the ‘Close’ answer: The idea is similar but may need adjustment. If your close answer is similar in size to the correct answer, then the same procedure will work. In our case, I used a region around the correct answer to give partial credit. This region is roughly 2 times the size of the correct answer. As a result, we only expect a reasonable answer to cover about 50% of the close answer at best, so our minimum compared difference should be about half of the key base difference value. To compensate, we add an additional adjustment factor for the ratio between ‘close’ and ‘correct’ answers (here, 2). We set our other adjustment factor like we did before, with the highest achievable uncorrected score (which, unsurprisingly, is about 0.4 now instead of 0.85).

The Final Score is just the greater of the similarity scores times a weighting factor (1 for ‘correct’, 0.8 for ‘close’), converted to a percentage.

To improve
To make this more useful, you would probably want to load it on the master slide and toggle triggers from your other slides to make comparisons.
Rearrange the code to call for the processing of the keys and blank canvas only once per slide, or only after resizing, instead of each time Check is clicked, to save some overhead.
The previous canvas elements and event handlers should probably be actively removed when replaced.
This uses a bunch of callback functions while processing blobs and comparing images, which requires several interval timers to know when each step is complete before starting the next. It might be done better using promises, or by restructuring the code a bit.
I think Resemble just works on files, blobs, and data URIs (e.g., base64-encoded images). I haven't checked whether it can work directly from elements or src links, but I don't think so.
Resemble should probably be loaded from static code to ensure functionality.
Key images could also be loaded from files instead of slide objects. The slide objects might be easier for users to locate and view, however.
There are other library options for comparing images. Some may be faster or more suited to your needs. If they produce a difference score, then the same approach should mostly apply.
Fix the sliding aspect of the slide on mobile when drawing with touch.
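The similarity arithmetic described above can be condensed into a small function. This is a sketch of the calculation as I read it, not the demo's code; the unit convention (differences as fractions 0..1) and function names are my assumptions, and the numbers will need tuning per scenario as noted above:

```javascript
// Difference values are fractions (0..1), e.g. Resemble's misMatchPercentage / 100.
// bestRaw is the adjustment factor: the best uncorrected score achievable by
// hand (about 0.85 for the 'correct' key in the demo, about 0.4 for 'close').
function similarity(comparedDiff, userBase, keyBase, bestRaw) {
  const denom = Math.min(userBase + keyBase, 1);   // cap the (user + key) sum at 100%
  let s = Math.max(0, 1 - comparedDiff / denom);   // base similarity, floored at 0
  s /= bestRaw;                                    // normalize the best score to 1
  // Area adjustment: penalize drawing much more or much less than the key.
  const lo = Math.min(userBase, keyBase);
  const hi = Math.max(userBase, keyBase);
  s *= lo / (hi * hi);                             // square in the denominator
  return Math.min(s, 1);                           // cap the final score at 1
}

// Final Score: the greater weighted similarity (1 for 'correct', 0.8 for 'close'),
// converted to a percentage.
function finalScore(simCorrect, simClose) {
  return Math.round(Math.max(simCorrect * 1.0, simClose * 0.8) * 100);
}
```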
Issue with Execute JavaScript Trigger in Storyline 360
I am working on a project in Articulate Storyline 360 and I'm trying to execute a JavaScript function when the timeline starts on a slide. The function is supposed to replace the default values of project text variables with specific messages based on the slide number. Here is the JavaScript code I'm using:

function setTexts(slideNumber) {
  console.log("setTexts function called with slideNumber: " + slideNumber); // Log the function call
  var correctTexts = [
    "This is a credit card statement with the amount of money that student owes.",
    "This document is intended to give a record of purchases and payments. It gives the card holder a summary of how much the card has been used during the billing period, as well as the amount that is due for that billing cycle."
  ];
  var incorrectPromptTexts = [
    "Here's a hint, there are dollar amounts on there and itemized activity, what type of document best fit these?",
    "Think about why you would need a statement for bills, please try again."
  ];
  var studentUnderstandTexts = [
    "Got it! I always get these confused with my car insurance for some reason.",
    "Yes now that I think about it, having an itemized record is very helpful, even if it's quite annoying to always get these in the mail."
  ];
  var positiveFeedbackTexts = [
    "_user_ you answered correctly! Awesome job!",
    "_user_ you got the question correct!",
    "_user_ you answered correctly awesome job!",
    "_user_! You answered correctly!",
    "_user_! You answered the question right!",
    "_user_, You answered right!",
    "_user_, You answered the question right!",
    "_user_, you answered correctly! Keep it up!",
    "_user_, You answered the question right! Keep up the great work",
    "_user_ you are doing great! Keep it up!"
  ];
  var negativeFeedbackTexts = [
    "_user_, sorry. the answer you gave wasn't what I was looking for.",
    "_user_, your answer was not quite right.",
    "_user_, It looks like you picked the wrong answer.",
    "_user_, I'm afraid the answer you chose wasn't the best one."
  ];
  var player = GetPlayer();

  // Log the text being set for each variable
  var correctText = correctTexts[slideNumber - 1];
  console.log("Setting Correct to: " + correctText);
  player.SetVar("Correct", correctText);

  var incorrectPromptText = incorrectPromptTexts[slideNumber - 1];
  console.log("Setting IncorrectPrompt to: " + incorrectPromptText);
  player.SetVar("IncorrectPrompt", incorrectPromptText);

  var studentUnderstandText = studentUnderstandTexts[slideNumber - 1];
  console.log("Setting StudentUnderstand to: " + studentUnderstandText);
  player.SetVar("StudentUnderstand", studentUnderstandText);

  // Randomly select positive and negative feedback
  var randomPositiveFeedback = positiveFeedbackTexts[Math.floor(Math.random() * positiveFeedbackTexts.length)];
  var randomNegativeFeedback = negativeFeedbackTexts[Math.floor(Math.random() * negativeFeedbackTexts.length)];
  console.log("Setting PositiveFeedbacktoUser to: " + randomPositiveFeedback);
  player.SetVar("PositiveFeedbacktoUser", randomPositiveFeedback);
  console.log("Setting NegativeFeedbacktoUser to: " + randomNegativeFeedback);
  player.SetVar("NegativeFeedbacktoUser", randomNegativeFeedback);
}

// Call the function with the current slide number
setTexts(GetPlayer().GetCurrentSlide().GetSlideNumber());

When I preview the project, the console shows the following message:

bootstrapper.min.js:2 actionator::exeJavaScript - Script1 is not defined

I have defined the setTexts function and ensured that it is called correctly in the "Execute JavaScript" trigger, but the error persists.

Steps I've taken:
Defined the setTexts function.
Added the function call setTexts(GetPlayer().GetCurrentSlide().GetSlideNumber()); in the "Execute JavaScript" trigger.
Verified that the function is being called correctly.

Any help or suggestions on how to resolve this issue would be greatly appreciated. Thank you!
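[Editor's note] One thing worth checking, offered as an assumption rather than a confirmed diagnosis: Storyline's documented JavaScript API exposes GetVar and SetVar on the player object, and GetCurrentSlide() is not part of it, so the final call may be what throws. A sketch of a workaround that passes the slide number through a Storyline number variable instead (the variable name SlideNumber and the placeholder texts are illustrative):

```javascript
// Pure helper: map a 1-based slide number to its text, guarding bad indexes.
function pickText(texts, slideNumber) {
  return texts[slideNumber - 1] !== undefined ? texts[slideNumber - 1] : texts[0];
}

// In Storyline, set a number variable (e.g. "SlideNumber") with a slide trigger,
// then read it here instead of calling the undocumented GetCurrentSlide().
// Guarded so the helper above can also run outside the Storyline player.
if (typeof GetPlayer === 'function') {
  var player = GetPlayer();
  var slideNumber = player.GetVar('SlideNumber'); // illustrative variable name
  var correctTexts = [
    'Text for slide 1...', // placeholders, not the post's real strings
    'Text for slide 2...'
  ];
  player.SetVar('Correct', pickText(correctTexts, slideNumber));
}
```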
Integrate HeyGen Interactive Avatar API into Storyline 360 Rise Block
Has anyone successfully integrated a chatbot or interactive avatar with a large knowledge base into Storyline 360 or Rise? I am trying to figure it all out, but there are so many different parts, and I am truly new to working with APIs. Any advice would be appreciated. [Solved]

Zoom Region Feature
How often do you use the Zoom Region feature in Storyline? I’ve noticed that I don’t tend to use it that often. One of the main reasons is that it only allows us to zoom in on a fixed part of the screen, which can feel a bit restrictive when designing engaging interactions. Wouldn't it be great if we could scale objects to any size we wanted and move them freely across the screen?

After some experimenting, I realized that with a little bit of JavaScript and some simple math, we can do just that! By using this technique, you can give your learners a more dynamic experience without being limited to the standard zoom functionality.

Want to see it in action? Preview this feature via this link. If you're curious to try it out yourself, you can also download the source file, which includes the JavaScript code I used. Download the file here and give it a go!

I'd love to hear your thoughts and see how others in the community are pushing the boundaries of Storyline. Have you ever used custom code or other creative solutions to enhance your projects?
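As a rough illustration of the idea, scaling and moving a slide object comes down to applying a CSS transform. This is my own sketch, not the downloadable source; the data-acc-text selector is an assumption about published Storyline output, and the tag value and numbers are made up:

```javascript
// Pure helper: build a CSS transform string from a scale factor and offsets.
function zoomTransform(scale, dx, dy) {
  return 'translate(' + dx + 'px, ' + dy + 'px) scale(' + scale + ')';
}

// Guarded so the helper above can also run outside a browser.
if (typeof document !== 'undefined') {
  // Selecting by accessibility text is one common way to find a slide object.
  var el = document.querySelector('[data-acc-text="zoomTarget"]');
  if (el) {
    el.style.transition = 'transform 0.5s ease';   // animate the zoom
    el.style.transformOrigin = 'center center';
    el.style.transform = zoomTransform(2, 100, -50); // 2x, nudged right and up
  }
}
```

The "simple math" mentioned above would go into computing the scale and offsets so the object lands where you want it on the slide.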