e-learning development — 1554 Topics

Layers Overlapping in Storyline 360
On one of my courses, the base layer slide has 14 buttons, each of which opens an associated layer. A few of those layers link to other layers with additional information. This has worked since the course was developed in 2023; when I reviewed it today, not so much. When I click each of the 14 buttons (without clicking the links to sub-layers), everything works fine: each layer appears as it should. However, when I click a button on a sub-layer that should return to the base layer, the layers start stacking, as in the attached screen capture. This occurs on slides 1.1 and 1.2, buttons 2 (Trace to Risk Record) and 10 (Mitigated Probability of Occurrence). All layers have the "Hide other slide layers" box checked. Before I change my layout and make each layer its own slide, I'm hoping there's a quicker fix.

360 images: no annotations?
It is bizarre to me that you can't add simple text objects to Storyline immersive training lessons. I'll attach an example from something I developed using an older tool. In CenarioVR, adding a text annotation to a 360 image is only slightly more complex than adding text to a normal slide in Storyline. In Storyline, annotating a 360 image would seem to require opening the image in Photoshop and adding the text directly to the bitmap. That's pointless extra work for the developer and far less flexible. Please, please tell me I'm missing something obvious and it's actually easy to add text to the image.

Seekbar functionality issues on slides with "Resume to Saved State" layers
Hi everyone, I've run into an issue with seekbar behavior. I have a slide whose layers are set to Resume to Saved State. When I revisit the slide after moving forward in the project, the seekbar doesn't track correctly: if I drag it, it seems to control the base-layer objects instead of the content on the current layer. Has anyone encountered this "overlap" issue before? I've attached a .story file to show what's happening; please look at the tabbing and question screens. On the question screen, the layers fail to maintain their visited state when the slide is revisited; only the base layer is displayed. I'd appreciate help ensuring that previously viewed layers remain visible when returning to this screen, and that the seekbar accurately controls the active layer when the slide is revisited.

AI text to voice hallucinating
My AI text to voice is omitting certain words and adding random phrases that are not in the text I provide. I have to copy/paste, retype, and regenerate the audio about five times before I get the correct output. Can anyone advise on a fix?

Storyline LMS bookmarking issue: resumes on random question instead of result slide
We are facing an issue with LMS bookmarking in our Articulate Storyline course and would appreciate your guidance. The course structure is as follows:

- Teach screens
- Assessment quiz
- Result slide
- Closing slide

The issue occurs after the learner completes the assessment, regardless of whether they pass or fail.

Expected behavior: after reaching the result slide, if the learner exits the course from the LMS and relaunches it, the course should resume from the result slide.

Actual behavior: on relaunch, instead of opening the result slide, the course resumes on one of the assessment question slides, and the question is different each time (sometimes Question 9, sometimes 11 or 14). Because of this, the learner is able to re-enter or retake the assessment, which should not happen once the result slide has been reached.

We checked all course logic, slide properties, and triggers thoroughly and did not find any issue. As part of troubleshooting, we also redeveloped the complete assessment, but the problem persists. Any suggestions would be highly appreciated. Thanks in advance!
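Not an answer, but a way to narrow it down: since Storyline publishes standard SCORM, you can ask the LMS directly what it has bookmarked. A sketch using SCORM 1.2 data-model names (run it from the browser console of the launched course; the helper names are mine, and you'd adapt the element names for SCORM 2004):

```javascript
// Walk up the frame hierarchy looking for the SCORM 1.2 API object,
// the standard discovery pattern from the SCORM Run-Time spec.
function findAPI(win) {
  let attempts = 0;
  while (win && !win.API && attempts < 10) {
    if (win.parent === win) break; // reached the top frame
    win = win.parent;
    attempts++;
  }
  return (win && win.API) || null;
}

// Read back what the LMS has stored. If lesson_location or suspend_data
// still points at a question slide after the result slide was reached,
// the stale bookmark (not the course triggers) is the thing to chase.
function logBookmark(win) {
  const api = findAPI(win);
  if (!api) return null;
  return {
    location: api.LMSGetValue("cmi.core.lesson_location"),
    status: api.LMSGetValue("cmi.core.lesson_status"),
    suspendData: api.LMSGetValue("cmi.suspend_data"),
  };
}
```

Comparing these values right after reaching the result slide and again right before exiting shows whether the course ever commits the result-slide bookmark, or whether the LMS is discarding the final commit.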
360 image: trigger or JavaScript to change view angle

This is an amazingly basic scripting task. I can set the initial view (though that setting is then ignored in version v3.113.36519.0), but I can't programmatically move the viewing angle to point where I want. This could be handled either by rotating the virtual camera to an actual 3D angle or by an action that "points the camera" at a marker. Neither appears to exist. The program does this exact thing in guided tour mode, so why not expose the functionality to the developer? Maybe it's available from JavaScript, but I can't locate an API for the 360 image functionality. If it exists, please point me toward it. Thank you.
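In case it helps anyone building a workaround: the math behind "point the camera at a marker" is small, even though Storyline doesn't expose a hook for it. A sketch assuming a Y-up coordinate system with the camera at the origin (the coordinate conventions here are my assumption, not anything documented by Storyline):

```javascript
// Convert a target direction (x, y, z) into the yaw/pitch a virtual
// camera at the origin would need in order to look at it.
// Assumes Y is up; angles are returned in degrees.
function lookAtAngles(x, y, z) {
  const yaw = Math.atan2(x, z);                  // left/right rotation
  const pitch = Math.atan2(y, Math.hypot(x, z)); // up/down rotation
  const toDeg = 180 / Math.PI;
  return { yaw: yaw * toDeg, pitch: pitch * toDeg };
}
```

For example, a marker one unit right and one unit ahead gives a yaw of 45° with zero pitch. If Articulate ever exposes a "set view angle" trigger or JavaScript call, this is all the glue code a "look at marker" action would need.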
Parity Between AI and Manual Translation Workflows, and AI Translation Quality Concerns

I want to start by saying that the new Articulate Localization tool is genuinely impressive. The ability to manage multiple language versions as a single course package is exactly the kind of workflow improvement our team has needed, and the in-context validation via Review is a great touch. That said, I'm running into significant gaps that are creating real problems for a current project, and I want to raise three interconnected issues.

1. Poor AI quality forces a manual workflow that breaks the multi-language learner experience

We're building a course that requires both Hindi and Bengali for our client's learners, with the requirement that learners can select their preferred language within the course itself — a single item in the LMS, not two separate courses. Both languages are available through AI translation. However, following a formal Language Quality Assessment of the Hindi AI output (detailed in point 2), the quality is not at a standard we can publish. As a result, we'll be using our internal globalisation team to provide human translations for both Hindi and Bengali — at significantly higher cost to our client.

Here's where it becomes a compounding problem: the manual XLIFF process produces standalone duplicate courses. It doesn't slot into the multi-language course stack the way AI translations do. That means we cannot offer learners a language toggle within a single course — we would have to publish two entirely separate courses in the LMS and ask learners to self-select the right one. That is a worse learner experience, harder to manage, and not what our client asked for.

To be clear: this situation was directly caused by the AI translation quality not being fit for purpose. We started this project intending to use Articulate Localization end-to-end. The tool's own output has pushed us onto a manual workflow that the tool doesn't fully support — and our client is the one bearing the cost of that, both financially and in terms of experience.
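For readers unfamiliar with the manual route: the round-trip works on exported XLIFF files, one translation unit per segment, along these lines (a minimal XLIFF 1.2 sketch; the file name, id, and text are illustrative, not from a real export):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xliff version="1.2" xmlns="urn:oasis:names:tc:xliff:document:1.2">
  <file original="course.story" source-language="en-US"
        target-language="hi-IN" datatype="plaintext">
    <body>
      <!-- id is illustrative; real exports use tool-generated identifiers -->
      <trans-unit id="slide1-title">
        <source>Electrical Safety Basics</source>
        <target>विद्युत सुरक्षा की मूल बातें</target>
      </trans-unit>
    </body>
  </file>
</xliff>
```

Since the exported file already carries source and target language codes per segment, importing a validated file of this shape into an existing language slot of the course stack seems technically feasible; today the import only produces a standalone duplicate.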
The fix we need: allow manually translated XLIFF files to be imported into the multi-language course stack, not just as standalone duplicates. If we're providing validated human translations, we should be able to manage them within the same course package and give learners the language-selection experience the tool is designed to deliver.

2. AI translation quality for Hindi (hi-IN) is below acceptable thresholds

We had the Hindi AI output formally assessed by our globalisation team using a Language Quality Assessment (LQA) framework against the WalkMe + Training profile (≥1,000–4,000 words), applied to a 2,860-word electrical safety course (en-US → hi-IN). The overall verdict: the translation is not suitable for customer-facing content.

Error summary by category:

Category                    Minor             Major            Notes
Fluency                     14 (+2 repeated)  0                Largest volume; affects naturalness throughout
Omission                    2                 1 (+1 repeated)  Most severe — content is dropped
Inconsistency               2 (+2 repeated)   0                Systemic terminology variance
Inconsistent with termbase  3 (+1 repeated)   0                Termbase not followed
Punctuation                 1 (+3 repeated)   0                Devanagari punctuation misused
Mistranslation              2                 0
Grammar                     1                 0

What this means in practice

Our localisation team reviewed the output qualitatively alongside the LQA scorecard and identified five compounding issues:

Clarity and readability. Much of the content is technically understandable but does not read like natural, professional Hindi. Sentences are awkward or overly literal, which makes the training hard to follow. The LQA flagged 16 fluency errors across the sampled content — including several that required substantial rewrites in the corrected version, not for accuracy but for basic readability.

Missing critical information. In multiple places, important safety instructions are partially or fully absent.
The clearest example: the navigational instruction "Please ensure you have flipped all cards, watched the video, and opened the transcript before moving on" was rendered as "Please ensure you have flipped all the cards before moving on" — the video and transcript steps dropped entirely. This segment appeared twice in the course, and the omission occurred both times. This is not a cosmetic issue: learners following the Hindi version could skip key actions or misunderstand safety procedures as a direct result.

Meaning changes. Some phrases are mistranslated, particularly around risk and mitigation. The LQA flagged two mistranslation errors in the sampled content alone. Even small wording changes in this context can weaken or alter safety messages — which is unacceptable in high-risk electrical-safety training.

Inconsistent use of key terms. Key concepts — equipment names, safety gear, risk terminology — are not used consistently. "High Voltage" alone appears both as हाई वोल्टेज (transliterated) and उच्च वोल्टेज (translated) across different parts of the same course, with no consistent rule applied and the provided termbase not followed. The same idea appearing in different forms across a course is genuinely confusing for learners.

Overall brand and safety risk. The combined effect is a course that does not meet the standard of a polished, trustworthy training product. For a safety-critical topic, this introduces reputational risk for the content owner and potential compliance and safety risk if learners misunderstand or fail to fully absorb the guidance. We would not be comfortable publishing this output without significant human rework.

Our recommendation

We recognise that the in-context validation feature in Review is Articulate's human-in-the-loop step, and we appreciate that it exists. The problem is that it can only be effective if the base AI output is of a standard that a reviewer can reasonably work with. What we received was not that.
When a translation has major omissions, meaning changes, and systematic terminology failures throughout, the Review step stops being a validation pass and becomes a full retranslation effort — one carried out by people who may not be professional translators, without the tooling or context that a language service provider would have. That's not a sustainable or safe quality-control mechanism for safety-critical content.

Our globalisation team's recommendation is that Articulate either improve the AI output quality to a standard where Review can function as intended, or explicitly position the output as a machine translation post-editing (MTPE) starting point — and set user expectations accordingly. Right now, the workflow implies a level of AI quality that our experience suggests isn't there, at least for Hindi.

3. "Bangla" should be labelled as "Bengali" (or both)

A small but important usability point: the language is listed in the tool as "Bangla" rather than "Bengali." While Bangla is the correct native name for the language, Bengali is the standard English name used across the L&D industry, by language service providers, in ISO language codes (bn), and in most professional translation contexts.

In practice, this caused real confusion on our project — we initially concluded that Bengali wasn't supported at all and were prepared to raise it as a missing language. We only discovered it was available by chance. If that happened to us, it will happen to others, and some won't catch the error before making decisions based on it. A simple fix would be to list it as "Bengali (Bangla)" or add "Bengali" as a searchable alias. This is a discoverability issue, not a technical one — but it has real consequences for users trying to plan multilingual projects.
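One mitigation we've sketched internally for the termbase problem in point 2: inconsistencies like हाई वोल्टेज vs. उच्च वोल्टेज can be flagged mechanically before human review ever starts. A rough pre-check (the termbase entry shown is an example, not our actual glossary; segment ids are illustrative):

```javascript
// Flag translated segments where a termbase source term appears in the
// source text but the approved target term is missing from the target.
// Crude string matching, but it catches exactly the transliterated-vs-
// translated drift described above before a human reviewer sees it.
function checkTermbase(segments, termbase) {
  const issues = [];
  for (const { id, source, target } of segments) {
    for (const [srcTerm, tgtTerm] of Object.entries(termbase)) {
      if (source.includes(srcTerm) && !target.includes(tgtTerm)) {
        issues.push({ id, srcTerm, expected: tgtTerm });
      }
    }
  }
  return issues;
}
```

Run over the XLIFF segment pairs with the project termbase, this turns "the termbase was not followed" from a sampling finding into a per-segment list a reviewer can work through.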
In summary, our requests:

- Allow manually translated XLIFF imports to be added to the multi-language course stack (not just as standalone duplicates)
- Investigate and address Hindi (hi-IN) AI translation quality — particularly around omission, fluency, and termbase compliance
- Consider clearer guidance or workflow support for MTPE as an intermediate option between raw AI output and full human translation
- Relabel "Bangla" as "Bengali (Bangla)" or add "Bengali" as a searchable alias — the current labelling causes users to incorrectly conclude the language isn't supported

We're genuinely invested in making Articulate Localization work for our projects. These issues are the main barriers right now. Thanks for the tool and for taking this feedback seriously.

Embed YouTube Video - Error 153
Hello, has something changed in Storyline? I'm trying to embed a YouTube video using an iframe. I'm copying the embed code from YouTube, but the player shows "Watch video on YouTube - Error 153...". I used:

<iframe width="560" height="315" src="https://www.youtube.com/embed/dqaN76shzJE?si=F_NwSZruLu5BJAU6" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

Any and all help would be appreciated. Thanks!
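Not a confirmed fix, but Error 153 generally indicates that YouTube could not identify the embedding site: the request reached the player without a usable referrer, which is what happens in Storyline's local preview and can also be caused by a strict referrer policy. Two things worth trying: test the published (web/LMS) output rather than the preview, and relax the referrer policy on the iframe (the attribute change below is a hypothesis to validate, not a guaranteed solution):

```html
<!-- Same embed, with referrerpolicy loosened so YouTube can see which
     page is embedding the video. Check it in published output served
     over https, not in local preview. -->
<iframe width="560" height="315"
        src="https://www.youtube.com/embed/dqaN76shzJE?si=F_NwSZruLu5BJAU6"
        title="YouTube video player" frameborder="0"
        allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
        referrerpolicy="origin"
        allowfullscreen></iframe>
```

If the video plays in the published output but not in preview, the embed code itself is fine and the error is purely an artifact of how the preview is served.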
AI Assistant & Localization in Rise - WONDERFUL!!!

I have created two courses using AI Assistant and Localization, and I am over the moon excited! The process used to be so very time-consuming, and the translation, per my language-expert checker, was spot on! I'm a department of one supporting multiple car rental brands in 28 countries. Hooray for Localization! At least for the Spanish translation. I'm working on my third course now, using AI Assistant and Localization. I could never have produced three interactive courses in two different languages, in under two weeks, before these new enhancements in Articulate 360. Never, never, never! Thank you, Articulate 360 team, not just for the two enhancements, but for making them very easy to use. You certainly kept it simple, streamlined, and focused on your user audience's needs. AI Assistant is not an ordinary AI Assistant. It is a Trainer's AI Assistant! Okay, Luminaries, you have another Articulate 360 cheerleader! (smile)🤩 Pat The Trainer