Parity Between AI and Manual Translation Workflows, and AI Translation Quality Concerns
I want to start by saying that the new Articulate Localization tool is genuinely impressive. The ability to manage multiple language versions as a single course package is exactly the kind of workflow improvement our team has needed, and the in-context validation via Review is a great touch. That said, I'm running into significant gaps that are creating real problems for a current project, and I want to raise three interconnected issues.

1. Poor AI quality forces a manual workflow that breaks the multi-language learner experience

We're building a course that requires both Hindi and Bengali for our client's learners, with the requirement that learners can select their preferred language within the course itself — a single item in the LMS, not two separate courses. Both languages are available through AI translation. However, following a formal Language Quality Assessment of the Hindi AI output (detailed in point 2), the quality is not at a standard we can publish. As a result, we'll be using our internal globalisation team to provide human translations for both Hindi and Bengali — at significantly higher cost to our client.

Here's where it becomes a compounding problem: the manual XLIFF process produces standalone duplicate courses. It doesn't slot into the multi-language course stack the way AI translations do. That means we cannot offer learners a language toggle within a single course — we would have to publish two entirely separate courses in the LMS and ask learners to self-select the right one. That is a worse learner experience, harder to manage, and not what our client asked for.

To be clear: this situation was directly caused by the AI translation quality not being fit for purpose. We started this project intending to use Articulate Localization end-to-end. The tool's own output has pushed us onto a manual workflow that the tool doesn't fully support — and our client is the one bearing the cost of that, both financially and in terms of experience.
The fix we need: allow manually translated XLIFF files to be imported into the multi-language course stack, not just as standalone duplicates. If we're providing validated human translations, we should be able to manage them within the same course package and give learners the language-selection experience the tool is designed to deliver.

2. AI translation quality for Hindi (hi-IN) is below acceptable thresholds

We had the Hindi AI output formally assessed by our globalisation team using a Language Quality Assessment (LQA) framework against the WalkMe + Training profile (≥1,000–4,000 words), applied to a 2,860-word electrical safety course (en-US → hi-IN). The overall verdict: the translation is not suitable for customer-facing content.

Error summary by category:

Category                    | Minor            | Major           | Notes
Fluency                     | 14 (+2 repeated) | 0               | Largest volume; affects naturalness throughout
Omission                    | 2                | 1 (+1 repeated) | Most severe — content is dropped
Inconsistency               | 2 (+2 repeated)  | 0               | Systemic terminology variance
Inconsistent with termbase  | 3 (+1 repeated)  | 0               | Termbase not followed
Punctuation                 | 1 (+3 repeated)  | 0               | Devanagari punctuation misused
Mistranslation              | 2                | 0               |
Grammar                     | 1                | 0               |

What this means in practice

Our localisation team reviewed the output qualitatively alongside the LQA scorecard and identified five compounding issues:

Clarity and readability. Much of the content is technically understandable but does not read like natural, professional Hindi. Sentences are awkward or overly literal, which makes the training hard to follow. The LQA flagged 16 fluency errors across the sampled content — including several that required substantial rewrites in the corrected version, not for accuracy but for basic readability.

Missing critical information. In multiple places, important safety instructions are partially or fully absent.
The clearest example: the navigational instruction "Please ensure you have flipped all cards, watched the video, and opened the transcript before moving on" was rendered as "Please ensure you have flipped all the cards before moving on" — the video and transcript steps dropped entirely. This segment appeared twice in the course, and the omission occurred both times. This is not a cosmetic issue. Learners following the Hindi version could skip key actions or misunderstand safety procedures as a direct result.

Meaning changes. Some phrases are mistranslated, particularly around risk and mitigation. The LQA flagged two mistranslation errors in the sampled content alone. Even small wording changes in this context can weaken or alter safety messages — which is unacceptable in high-risk, electrical-safety training.

Inconsistent use of key terms. Key concepts — equipment names, safety gear, risk terminology — are not used consistently. "High Voltage" alone appears both as हाई वोल्टेज (transliterated) and उच्च वोल्टेज (translated) across different parts of the same course, with no consistent rule applied and the provided termbase not followed. The same idea appearing in different forms across a course is genuinely confusing for learners.

Overall brand and safety risk. The combined effect is a course that does not meet the standard of a polished, trustworthy training product. For a safety-critical topic, this introduces reputational risk for the content owner and potential compliance and safety risk if learners misunderstand or fail to fully absorb the guidance. We would not be comfortable publishing this output without significant human rework.

Our recommendation

We recognise that the in-context validation feature in Review is Articulate's human-in-the-loop step, and we appreciate that it exists. The problem is that it can only be effective if the base AI output is of a standard that a reviewer can reasonably work with. What we received was not that.
When a translation has major omissions, meaning changes, and systematic terminology failures throughout, the Review step stops being a validation pass and becomes a full retranslation effort — one carried out by people who may not be professional translators, without the tooling or context that a language service provider would have. That's not a sustainable or safe quality control mechanism for safety-critical content.

Our globalisation team's recommendation is that Articulate either improve the AI output quality to a standard where Review can function as intended, or explicitly position the output as a machine translation post-editing (MTPE) starting point — and set user expectations accordingly. Right now, the workflow implies a level of AI quality that our experience suggests isn't there, at least for Hindi.

3. "Bangla" should be labelled as "Bengali" (or both)

A small but important usability point: the language is listed in the tool as "Bangla" rather than "Bengali." While Bangla is the correct native name for the language, Bengali is the standard English name used across the L&D industry, by language service providers, in ISO language codes (bn), and in most professional translation contexts.

In practice, this caused real confusion on our project — we initially concluded that Bengali wasn't supported at all and were prepared to raise it as a missing language. We only discovered it was available by chance. If that happened to us, it will happen to others, and some won't catch the error before making decisions based on it.

A simple fix would be to list it as "Bengali (Bangla)" or add "Bengali" as a searchable alias. This is a discoverability issue, not a technical one — but it has real consequences for users trying to plan multilingual projects.
In summary:

- Allow manually translated XLIFF imports to be added to the multi-language course stack (not just as standalone duplicates)
- Investigate and address Hindi (hi-IN) AI translation quality — particularly around omission, fluency, and termbase compliance
- Consider clearer guidance or workflow support for MTPE as an intermediate option between raw AI output and full human translation
- Relabel "Bangla" as "Bengali (Bangla)" or add "Bengali" as a searchable alias — the current labelling causes users to incorrectly conclude the language isn't supported

We're genuinely invested in making Articulate Localization work for our projects. These issues are the main barriers right now. Thanks for the tool and for taking this feedback seriously.

Embed Youtube Video - Error 153
Hello,

Has something changed in Storyline?! I'm trying to embed a YouTube video using an iframe. I'm copying the embed code from YouTube, but it says "Watch video on YouTube - Error 153...". I used:

<iframe width="560" height="315" src="https://www.youtube.com/embed/dqaN76shzJE?si=F_NwSZruLu5BJAU6" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

Any/all help would be appreciated. Thanks

Issue with questions being linked to the wrong result slide
Hello everyone,

I am currently working on an assessment structure in Articulate and encountering an issue with how my result pages are organized. I have several evaluation components (e.g., 5.1 Comp_COM, 5.2 Comp_COS, 5.3 Comp_LTE, 5.4 Comp_GES, 5.5 Comp_SOI, etc.), each intended to be associated with a specific score category. However, when I edit the questions (even before publishing the module), some evaluations seem to be automatically reassigned to the global score (6.1 Score_Global) instead of remaining linked to their respective sections (e.g., 5.4 or 5.5). As a result, I have to manually check each question to ensure everything is still correctly assigned.

I would like to understand:

- How to control or lock the association between questions / result slides and their corresponding score variables
- Why some evaluations automatically switch to the wrong result slides
- Whether there is a recommended method to ensure a stable and consistent structure across multiple result slides

Thank you in advance for your support!! 😀

Opinion: Why "High-Energy" Videos Might Be Hurting Your Students' Retention (The Motion Trap)
We need to talk about the "TikTok-ification" of education. In 2026, the trend is clear: "Make it faster. Make it move. Add more cuts." The assumption is that if a video is dynamic (like VEO or Kling outputs), engagement goes up, and therefore learning goes up. But as someone deep in the EdTech trenches, I’ve looked at the cognitive science papers, and the results are worrying. We call it "The Motion Trap." Here is why "boring" slides might actually beat "cinematic" AI video when it comes to deep learning.

1. The Science: "Transient Information Effect"

Learning isn't about staring at a screen; it's about processing information in working memory. According to research on multimedia learning (like Mayer’s principles), full-motion video creates a "Transiency Effect." Information appears, moves, and disappears constantly.

- The learner’s brain has to spend energy just to "catch" the visual before it vanishes.
- This creates a cognitive bottleneck.

If you are teaching a complex concept (e.g., Quantum Physics or a Corporate SOP), a fast-paced AI video floods the brain. The student feels entertained ("I get the vibe"), but tomorrow, they can’t recall the specific steps.

2. The Solution: "Visual Narratives" (Why Slides Win)

This is why slide-based Visual Narratives often outperform full-motion video for retention.

- Stable Anchors: A slide stays on screen. The eye can scan the diagram while the ear listens to the explanation.
- Segmentation: It forces the content into "chunks" (One slide = One idea).
- Signaling: You can use a static arrow or highlight to say "Look here," which is harder to do in a constantly moving video.

3. The Decision Matrix: When to use Motion?

I use this simple matrix before creating any courseware. You can steal it:

A) Goal: Concept Mastery (Definitions, History, Principles)
- Use: Visual Narrative (Slides).
- Why: Students need time to absorb the structure. Motion is a distraction here.
B) Goal: Physical Procedures (How to change a tire)
- Use: Motion (but slowed down).
- Why: The movement is the lesson.

C) Goal: Corporate SOP / Compliance
- Use: Visual Narrative.
- Why: Employees need to follow steps. A continuous stream of video makes it hard to pause/review specific steps.

Turn on/off (control) the checkmarks in the menu
Hi Everyone,

This question comes up more and more often in our ID, company, and learner community (many learner comments about the misleading checkmarks), and I know it has been a hot topic for all of us for years now. E.g. Check / Tick marks in the menu | Articulate - Community

Premise: We need control over the checkmarks in the menu. Currently, the tick shows up immediately after a slide starts, which gives learners a false impression of slide completion. The checkmark should appear only when the slide is actually completed by a condition — regardless of its type: timeline starts/ends, click on a button=states, anything else. (See this later.)

In the LMS publishing options, when you set up course completion for x% of slides viewed, the slide count uses the same idea as the checks do now: a slide is counted as soon as its timeline starts — probably connected. I think this is okay there, because you can play with it. But the slide ticks in the menu serve a different function.

UX/UI perspective: If a checkmark appears next to the slide title in the menu, it means, from every perspective, that the slide is viewed and completed and you don't have to do anything else. The same idea is used everywhere else: in any app or online platform, a tick shows whether a field is completed, whether a question is answered, or that you have scrolled through the agreement or contract and accepted the terms, etc. A tick means completed: you don't need to do anything else with the checked part. Our learners (and we too) are used to this meaning. If the actual intention of the checkmark is to show that a slide has been visited, I am afraid it can lead people astray — we read it as completion.

In Rise, lesson completion in the menu works perfectly: an icon shows that something is in progress in a given part, and when it is done, it receives a tick.
The desired functions: for the menu checkmarks, we should be able to

- turn them on and off, and
- set up a condition for when to activate a slide's checkmark in the menu.

E.g. I turn the automatic menu ticking off, then set this option in the slide properties: when the timeline starts, or when the timeline ends. Or I could also initiate it with a custom trigger.

Dear Articulate, do you have plans to change this automatic marking and give control to the designers? I know that it is really complicated and not easy at all from your developer side, and it can also cause problems if we users forget to set it up, but I think it is a must-have from a UX/UI perspective.

Alternatives: In the article discussion above (and on the web) there are a few CSS and JavaScript options to turn the checkmarks off completely in the menu. But in these cases I think we can fall to the other side: the menu will not show slide completion at all. I seriously don't know which is the better option. Our users are already used to the checkmarks in the menu — elsewhere and in our learnings too. Just the timing is not right.

Question: And here, after this wall of text, I arrive at my personal silly question. :) I am not good at JavaScript at all, and probably it is not even possible, or you have already come up with it. But this could be a good workaround. I played with JavaScript triggers:

- I used the JavaScript trigger from the article above to remove the ticks (when the timeline starts); it worked perfectly.
- Then I used another JavaScript trigger to turn them back on when the timeline ends. That also worked.
- But of course, when I go to the next slide, this pair of triggers will cause a mess: the first one will make the tick on the already-completed slide title disappear from the menu. Which is pretty obvious.

But then a question came to my mind: is there an option or possibility in JavaScript to make a trigger that turns the checkmark off and on ONLY for a given line (slide) in the menu?
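In case it helps as a starting point, here is a rough, unverified sketch of that idea. Storyline's player DOM is not documented, so the selectors below (`.cs-listitem`, `.cs-icon`) are placeholder assumptions — inspect your published output in the browser dev tools and substitute whatever class names your player version actually uses.

```javascript
// Unverified sketch: toggle the checkmark for ONE menu entry only.
// '.cs-listitem' and '.cs-icon' are placeholder selectors — check the
// real class names in your published player before relying on this.

// Matches a menu entry by its visible label (trimmed, case-insensitive).
function titleMatches(label, wanted) {
  return label.trim().toLowerCase() === wanted.trim().toLowerCase();
}

// Shows or hides the checkmark for the menu entry with the given title.
function setMenuCheck(slideTitle, checked) {
  document.querySelectorAll('.cs-listitem').forEach(function (item) {
    if (titleMatches(item.textContent || '', slideTitle)) {
      var icon = item.querySelector('.cs-icon');
      if (icon) icon.style.visibility = checked ? 'visible' : 'hidden';
    }
  });
}

// Example use in an "Execute JavaScript" trigger (when timeline ends):
// setMenuCheck('2.3 Safety rules', true);
```

Because the function only touches the entry whose label matches, it would not disturb ticks already shown on other slides — which is exactly the mess the trigger pair above runs into.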
---

I think we really need a solution, or at least a specific and official workaround, for this basic UX/UI feature. Thank you (Articulate and our great Heroes Community) in advance for your help! Wishing you all the best and great designs!

Tibi

Audio button functions and states within a self-built WBT menu and options
Hi everyone,

I am currently working on a self-built WBT menu with my own option buttons (refresh, fullscreen, turn subtitles on/off, etc.). This also includes a button for muting/unmuting audio. (Screenshot "Menue 1")

I have tried a lot so far: worked through my own ideas, followed tutorials in various forum posts and videos, asked colleagues, tried prompts in Copilot, and so on. The thing is, I always get stuck at the same point, so I am reaching out to you in the hope that you can help me out.

My setup is the following — first slide in master slide view: I have a button that consists of these elements grouped together: a circular "Ellipse" element, and above it a vector graphic of an ear. This vector graphic has the Normal state (regular ear icon) and the Selected state (ear icon crossed out). (Screenshot "Icon states") The Ellipse also has a second state for hovering, with a slight shadow around the circle.

To make things a little more complicated, the user can either click this button directly from the side menu, or navigate to the "Menue" button, which expands the side menu on another layer (Screenshot "Menue 2 Expanded"). Next to each button is a label describing its function, and thanks to a hotspot over each label, it is also clickable. So the button must function and change its states in both views: the regular menu and the expanded menu — and from the expanded menu as well.

Goal — what should happen when the user clicks the audio button? As soon as the user clicks the button group, the audio on the slide should be muted and the state of the vector graphic should change to the crossed-out ear. When the user clicks the button again, the audio should be unmuted and the vector graphic should return to its original state (the normal ear). So the audio should not be stopped and resumed: it should keep "playing" silently in the background with a continuous timeline.
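For the core mute behaviour (silence without pausing), one possible direction — not an official Storyline feature, just a sketch using standard HTML5 media properties — is to flip the `muted` flag on every `<audio>` and `<video>` element in the published page from an "Execute JavaScript" trigger. Playback position and timelines stay untouched:

```javascript
// Sketch only: mute/unmute every HTML5 media element on the page
// without pausing it, so the slide timeline keeps running.
function applyMute(muted) {
  document.querySelectorAll('audio, video').forEach(function (el) {
    el.muted = muted; // volume off, playback position untouched
  });
}

// Toggle helper for the ear button; returns the new state so the trigger
// can store it in a Storyline variable (e.g. one named "AudioMuted").
// The typeof-guard just lets the pure toggle logic run outside a browser.
function toggleMute(currentlyMuted) {
  var next = !currentlyMuted;
  if (typeof document !== 'undefined') {
    applyMute(next);
  }
  return next;
}
```

To make the state survive slide changes, the result could be kept in a course variable via Storyline's documented GetPlayer().GetVar()/SetVar() API and re-applied (together with the ear icon's state) when each slide's timeline starts — re-applying on slide start also catches media elements that did not exist when the button was last clicked.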
A slide can also contain multiple consecutive audio files that play one after the other; the audio button should mute/unmute all of them when clicked. And when the user moves to the next slide, the WBT should remember the selected state (audio on/off) and show the ear icon accordingly (crossed out/normal).

These last two points should also apply to the audio of videos: the video keeps playing while its audio is muted/unmuted. To make it trickier, this should also work when there are both audios and videos on one slide.

Does anyone have an idea how to solve this riddle and incorporate all the required specifications? I would be beyond grateful. I have uploaded a scrubbed, condensed version of my SL project file so that you can have a look at the setup and the previous as well as current triggers, JavaScript, etc.

Thanks in advance for your support. Best regards

Big problem with folder usage and organization.
As a company, we use folders to maintain an organized structure when creating courses. The big problem: for a quick search, we use the search bar to look for courses by name, but the search results do not show which folder each course is in. At the very least, the folder where the course is located should appear.