general e-learning
Video Publishing Error
I have a course that I tried to publish as a video. On one slide, a variable increases in value and that value is displayed on screen. If I publish it as a course, it works fine; if I publish it as a video, the numbers do not display correctly. I'm guessing the video publishing is sluggish in recognizing the variable changes, or something along those lines. Anyone have any ideas? (I'm working around the issue by screen-recording the course-published version, which displays correctly.) I've attached a version of the offending slide if anyone wants to take a look.
Parity Between AI and Manual Translation Workflows, and AI Translation Quality Concerns

I want to start by saying that the new Articulate Localization tool is genuinely impressive. The ability to manage multiple language versions as a single course package is exactly the kind of workflow improvement our team has needed, and the in-context validation via Review is a great touch. That said, I'm running into significant gaps that are creating real problems for a current project, and I want to raise three interconnected issues.

1. Poor AI quality forces a manual workflow that breaks the multi-language learner experience

We're building a course that requires both Hindi and Bengali for our client's learners, with the requirement that learners can select their preferred language within the course itself — a single item in the LMS, not two separate courses. Both languages are available through AI translation. However, following a formal Language Quality Assessment of the Hindi AI output (detailed in point 2), the quality is not at a standard we can publish. As a result, we'll be using our internal globalisation team to provide human translations for both Hindi and Bengali — at significantly higher cost to our client.

Here's where it becomes a compounding problem: the manual XLIFF process produces standalone duplicate courses. It doesn't slot into the multi-language course stack the way AI translations do. That means we cannot offer learners a language toggle within a single course — we would have to publish two entirely separate courses in the LMS and ask learners to self-select the right one. That is a worse learner experience, harder to manage, and not what our client asked for.

To be clear: this situation was directly caused by the AI translation quality not being fit for purpose. We started this project intending to use Articulate Localization end-to-end. The tool's own output has pushed us onto a manual workflow that the tool doesn't fully support — and our client is the one bearing the cost of that, both financially and in terms of experience.

The fix we need: allow manually translated XLIFF files to be imported into the multi-language course stack, not just as standalone duplicates. If we're providing validated human translations, we should be able to manage them within the same course package and give learners the language-selection experience the tool is designed to deliver.

2. AI translation quality for Hindi (hi-IN) is below acceptable thresholds

We had the Hindi AI output formally assessed by our globalisation team using a Language Quality Assessment (LQA) framework against the WalkMe + Training profile (≥1,000–4,000 words), applied to a 2,860-word electrical safety course (en-US → hi-IN). The overall verdict: the translation is not suitable for customer-facing content.

Error summary by category

Category                     Minor              Major             Notes
Fluency                      14 (+ 2 repeated)  0                 Largest volume; affects naturalness throughout
Omission                     2                  1 (+ 1 repeated)  Most severe — content is dropped
Inconsistency                2 (+ 2 repeated)   0                 Systemic terminology variance
Inconsistent with termbase   3 (+ 1 repeated)   0                 Termbase not followed
Punctuation                  1 (+ 3 repeated)   0                 Devanagari punctuation misused
Mistranslation               2                  0
Grammar                      1                  0

What this means in practice

Our localisation team reviewed the output qualitatively alongside the LQA scorecard and identified five compounding issues:

Clarity and readability
Much of the content is technically understandable but does not read like natural, professional Hindi. Sentences are awkward or overly literal, which makes the training hard to follow.
The LQA flagged 16 fluency errors across the sampled content — including several that required substantial rewrites in the corrected version, not for accuracy but for basic readability.

Missing critical information
In multiple places, important safety instructions are partially or fully absent. The clearest example: the navigational instruction "Please ensure you have flipped all cards, watched the video, and opened the transcript before moving on" was rendered as "Please ensure you have flipped all the cards before moving on" — the video and transcript steps dropped entirely. This segment appeared twice in the course, and the omission occurred both times. This is not a cosmetic issue. Learners following the Hindi version could skip key actions or misunderstand safety procedures as a direct result.

Meaning changes
Some phrases are mistranslated, particularly around risk and mitigation. The LQA flagged two mistranslation errors in the sampled content alone. Even small wording changes in this context can weaken or alter safety messages — which is unacceptable in high-risk, electrical-safety training.

Inconsistent use of key terms
Key concepts — equipment names, safety gear, risk terminology — are not used consistently. "High Voltage" alone appears both as हाई वोल्टेज (transliterated) and उच्च वोल्टेज (translated) across different parts of the same course, with no consistent rule applied and the provided termbase not followed. The same idea appearing in different forms across a course is genuinely confusing for learners.

Overall brand and safety risk
The combined effect is a course that does not meet the standard of a polished, trustworthy training product. For a safety-critical topic, this introduces reputational risk for the content owner and potential compliance and safety risk if learners misunderstand or fail to fully absorb the guidance. We would not be comfortable publishing this output without significant human rework.

Our recommendation
We recognise that the in-context validation feature in Review is Articulate's human-in-the-loop step, and we appreciate that it exists. The problem is that it can only be effective if the base AI output is of a standard that a reviewer can reasonably work with. What we received was not that. When a translation has major omissions, meaning changes, and systematic terminology failures throughout, the Review step stops being a validation pass and becomes a full retranslation effort — one being carried out by people who may not be professional translators, without the tooling or context that a language service provider would have. That's not a sustainable or safe quality control mechanism for safety-critical content.

Our globalisation team's recommendation is that Articulate either improve the AI output quality to a standard where Review can function as intended, or explicitly position the output as a machine translation post-editing (MTPE) starting point — and set user expectations accordingly. Right now, the workflow implies a level of AI quality that our experience suggests isn't there, at least for Hindi.

3. "Bangla" should be labelled as "Bengali" (or both)

A small but important usability point: the language is listed in the tool as "Bangla" rather than "Bengali." While Bangla is the correct native name for the language, Bengali is the standard English name used across the L&D industry, by language service providers, in ISO language codes (bn), and in most professional translation contexts.
In practice, this caused real confusion on our project — we initially concluded that Bengali wasn't supported at all and were prepared to raise it as a missing language. We only discovered it was available by chance. If that happened to us, it will happen to others, and some won't catch the error before making decisions based on it. A simple fix would be to list it as "Bengali (Bangla)" or add "Bengali" as a searchable alias. This is a discoverability issue, not a technical one — but it has real consequences for users trying to plan multilingual projects.

To summarise our requests:
- Allow manually translated XLIFF imports to be added to the multi-language course stack (not just as standalone duplicates)
- Investigate and address Hindi (hi-IN) AI translation quality — particularly around omission, fluency, and termbase compliance (a rough automated omission check is sketched below)
- Consider clearer guidance or workflow support for MTPE as an intermediate option between raw AI output and full human translation
- Relabel "Bangla" as "Bengali (Bangla)" or add "Bengali" as a searchable alias — the current labelling causes users to incorrectly conclude the language isn't supported

We're genuinely invested in making Articulate Localization work for our projects. These issues are the main barriers right now. Thanks for the tool and for taking this feedback seriously.
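P.S. For anyone else stuck on the manual XLIFF route: a quick automated pass before human review can flag candidates for omissions like the flipped-cards example above. Below is a minimal Node.js sketch. The regex "parsing" is deliberately crude and assumes simple <trans-unit><source>…</source><target>…</target> markup (a real XML parser, e.g. the fast-xml-parser npm package, would be more robust), and the 0.5 length ratio is a rough heuristic, not a standard — treat it as a starting point, not a validator.

// qa-xliff-omissions.js — rough first-pass omission check for XLIFF files.
// Run with: node qa-xliff-omissions.js translated-course.xlf
const fs = require('fs');

const xliff = fs.readFileSync(process.argv[2], 'utf8');
const unitRe = /<trans-unit[^>]*>([\s\S]*?)<\/trans-unit>/g;

let unit;
while ((unit = unitRe.exec(xliff)) !== null) {
  // Pull out the source and target text of each translation unit.
  const source = (unit[1].match(/<source[^>]*>([\s\S]*?)<\/source>/) || [])[1] || '';
  const target = (unit[1].match(/<target[^>]*>([\s\S]*?)<\/target>/) || [])[1] || '';

  // Flag empty targets, and targets much shorter than their source.
  // The 0.5 ratio is a rough heuristic — tune it per language pair.
  if (!target.trim()) {
    console.log('EMPTY TARGET:', source.slice(0, 80));
  } else if (target.length < source.length * 0.5) {
    console.log('POSSIBLE OMISSION:', source.slice(0, 80));
  }
}

Anything it flags still needs a human eye — it only narrows down where reviewers should look first.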
Behind the Scenes: Storyline's Move to Modern .NET

We just wrapped a project that's been hanging over Storyline for a long time: moving from .NET Framework 4.8 to modern .NET (now .NET 10). This one goes deeper than it might sound.

Back when Storyline was first built, choosing .NET Framework was the obvious call. This was 2010-ish. Windows dominated our space, and the .NET ecosystem gave us a lot of what we needed to move fast and build a really capable tool. That decision worked. For a long time. It also shaped some of the realities of the product today. Questions about platform support come up a lot, and early architectural choices like this are a big part of that story. They helped us move fast early on, but they also made certain paths more complex later.

Fast forward to now… Microsoft has effectively stopped evolving .NET Framework and put its energy into modern .NET. Meanwhile, we were still running on a foundation that wasn't keeping pace with where things were going. So we made the call to move.

This wasn't a simple upgrade. We relied on parts of .NET Framework that don't exist anymore. AppDomains. Binary serialization. A handful of "seemed like a great idea at the time" features that modern .NET intentionally left behind. We had to rethink and rebuild some pretty fundamental parts of the product.

So what did all of this actually get us? We're now on a modern, actively supported runtime. It's easier for us to keep improving performance, adopt new capabilities, and evolve the platform without constantly working around legacy constraints. We also retired some very old pieces of the system along the way, which… felt pretty great 😅

And then there's performance. Microsoft has invested heavily in performance improvements in modern .NET, and we're seeing that surface in Storyline. We ran benchmarks across 18 Storyline projects, measuring open, save, and publish times. Every single project got faster, with improvements ranging from 0.4% to nearly 30%. The larger the project, the larger the improvement.

In the animated gif below, I put .NET Framework (left) head-to-head with modern .NET publishing the same course. Neither project was pre-published to warm the cache, and I even gave .NET Framework a slight head start by clicking Publish there first. The gif is sped up for easier viewing, but the result is real: modern .NET finishes publishing well before .NET Framework.

Big credit to the team that pulled this off. This was deep, risky work in some of the most critical parts of the product.

Curious to hear from folks here: if you're on the latest Storyline 360, have you noticed any performance improvements when opening, saving, or publishing your projects?
Create link to policies in SharePoint

Hi clever community! Complete novice here creating an induction program in Rise, and now trying to develop a section in Storyline where people are required to read company policies linked from an external SharePoint site, and to acknowledge they have read each policy before moving on to the next one. Any step-by-step guide, advice, or pointer to a tutorial would be greatly appreciated! Thank you :)
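One common pattern for this (not the only one): give each policy its own button that opens the SharePoint link and sets a True/False tracking variable, then keep the Next button disabled until every variable is true. GetPlayer() and SetVar are Storyline's documented JavaScript interface for published output; the URL and variable name below are placeholders you would swap for your own, and the variable must be created in Storyline before the trigger can set it.

// Execute JavaScript trigger on the "Read policy" button.
// "Policy1Opened" is a placeholder True/False variable — create it in
// Storyline first. The SharePoint URL is likewise just an example.
var player = GetPlayer();
window.open('https://yourtenant.sharepoint.com/sites/HR/policy1.pdf', '_blank');
player.SetVar('Policy1Opened', true);

For the acknowledgment step itself, an ordinary checkbox plus a trigger along the lines of "Change state of Next button to Normal when Policy1Opened = True" needs no further code.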
Issue with questions being linked to the wrong result slide

Hello everyone, I am currently working on an assessment structure in Articulate and encountering an issue with how my result pages are organized. I have several evaluation components (e.g., 5.1 Comp_COM, 5.2 Comp_COS, 5.3 Comp_LTE, 5.4 Comp_GES, 5.5 Comp_SOI, etc.), each intended to be associated with a specific score category. However, when I edit the questions (even before publishing the module), some evaluations seem to be automatically reassigned to the global score (6.1 Score_Global) instead of remaining linked to their respective sections (e.g., 5.4 or 5.5). As a result, I have to manually check each question to ensure everything is still correctly assigned.

I would like to understand:
- How to control or lock the association between questions / result slides and their corresponding score variables
- Why some evaluations automatically switch to the wrong result slide
- Whether there is a recommended method to ensure a stable and consistent structure across multiple result slides

Thank you in advance for your support!! 😀
Embed YouTube Video - Error 153

Hello, has something changed in Storyline?! I'm trying to embed a YouTube video using an iframe. I'm copying the embed code from YouTube, but it says 'Watch video on YouTube - Error 153...'. I used:

<iframe width="560" height="315" src="https://www.youtube.com/embed/dqaN76shzJE?si=F_NwSZruLu5BJAU6" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

Any/all help would be appreciated. Thanks
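In case it helps anyone hitting the same thing: Error 153 generally indicates that the embed request reached YouTube without an HTTP Referer header, which commonly happens when a course is previewed locally (or the hosting page suppresses the referrer) rather than served from a real web domain. Two things worth trying, hedged because the behaviour depends on how and where the course is hosted: test from a published, web-hosted copy instead of local preview, and experiment with a less strict referrerpolicy on the same embed, for example:

<iframe width="560" height="315"
        src="https://www.youtube.com/embed/dqaN76shzJE?si=F_NwSZruLu5BJAU6"
        title="YouTube video player" frameborder="0"
        allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
        referrerpolicy="origin" allowfullscreen></iframe>

If the video still fails when web-hosted, the block is likely coming from YouTube's side rather than from Storyline.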
Opinion: Why "High-Energy" Videos Might Be Hurting Your Students' Retention (The Motion Trap)

We need to talk about the "TikTok-ification" of education. In 2026, the trend is clear: "Make it faster. Make it move. Add more cuts." The assumption is that if a video is dynamic (like VEO or Kling outputs), engagement goes up, and therefore learning goes up. But as someone deep in the EdTech trenches, I've looked at the cognitive science papers, and the results are worrying. We call it "The Motion Trap." Here is why "boring" slides might actually beat "cinematic" AI video when it comes to deep learning.

1. The Science: "Transient Information Effect"
Learning isn't about staring at a screen; it's about processing information in working memory. According to research on multimedia learning (like Mayer's principles), full-motion video creates a "Transiency Effect." Information appears, moves, and disappears constantly.
- The learner's brain has to spend energy just to "catch" the visual before it vanishes.
- This creates a cognitive bottleneck.
If you are teaching a complex concept (e.g., Quantum Physics or a Corporate SOP), a fast-paced AI video floods the brain. The student feels entertained ("I get the vibe"), but tomorrow, they can't recall the specific steps.

2. The Solution: "Visual Narratives" (Why Slides Win)
This is why slide-based Visual Narratives often outperform full-motion video for retention.
- Stable Anchors: A slide stays on screen. The eye can scan the diagram while the ear listens to the explanation.
- Segmentation: It forces the content into "chunks" (One slide = One idea).
- Signaling: You can use a static arrow or highlight to say "Look here," which is harder to do in a constantly moving video.

3. The Decision Matrix: When to use Motion?
I use this simple matrix before creating any courseware. You can steal it:
A) Goal: Concept Mastery (Definitions, History, Principles)
- Use: Visual Narrative (Slides).
- Why: Students need time to absorb the structure. Motion is a distraction here.
B) Goal: Physical Procedures (How to change a tire)
- Use: Motion (but slowed down).
- Why: The movement is the lesson.
C) Goal: Corporate SOP / Compliance
- Use: Visual Narrative.
- Why: Employees need to follow steps. A continuous stream of video makes it hard to pause/review specific steps.
Turn on/off (control) the checkmarks in the menu

Hi Everyone, this question comes up more and more often in our ID, company, and learner community (many learner comments about the misleading checkmarks), and I know it has been a hot topic for all of us for years now. E.g. Check / Tick marks in the menu | Articulate - Community

Premise:
The premise is that we need to have control over the checkmarks in the menu. Right now, the tick shows up immediately after a slide starts, which gives learners a false picture of slide completion. The checkmarks should appear only when the slide is actually completed by a condition — regardless of its type: timeline starts/ends, click on a button/states, anything else. (See this later.) In the LMS publishing options, when you set up course completion for x% of slides viewed, the slide count uses the same idea the checks use now: a slide is counted as soon as its timeline starts — probably the two are connected. I think this is okay there, because you can play with it. But the slide ticks in the menu serve a different function.

UX/UI perspective:
If you have a checkmark next to the slide title in the menu, it means, from every perspective, that the slide is viewed and completed and you don't have to do anything else. The same idea is used everywhere else: in any app or online platform, a tick shows that a field is completed or not, that a question is answered or not, or that you need to scroll through the agreement or contract before the tick changes, etc. Tick means completed: you don't need to do anything else with the checked part. Our learners (and we too) have adapted to this meaning. If the actual intention of the checkmarks is to show that a slide has been visited, I am afraid it can lead people astray — we read it as completion. In Rise, lesson completion in the menu works perfectly: an icon shows that something is in progress in a given part, and when it is done it receives a tick.

The desired functions:
For the menu checkmarks we should be able to:
-turn them on and off, and
-set up a condition for when to activate a slide's checkmark in the menu.
E.g. I turn the automatic menu ticking off, then set this up in the slide properties: when timeline starts, or when timeline ends. Or I could also trigger it with a custom trigger.

Dear Articulate, do you have plans to change this automatic marking and give control to the designers? I know it is really complicated and not easy at all on your developer side, and it can also cause problems if we users forget to set it up, but I think it is a must-have from a UX/UI perspective.

Alternatives:
In the article discussion above (and on the web) there are a few CSS and JavaScript options to turn it off completely in the menu. But in this case I think we'd fall off the other side: the menu would not show slide completion at all. I seriously don't know which is the better option. Our users have already adapted to the checkmarks in the menu — elsewhere and in our learnings too. Just the timing is not perfect.

Question:
And here, after this wall of text, I arrive at my personal silly question. :) I am not good at JavaScript at all, and probably it is not even possible, or you have already come up with it. But this could be a good workaround. I played with JavaScript triggers:
-I used the JavaScript trigger from the article above to remove the ticks (when the timeline starts), and it worked perfectly.
-Then I used another JavaScript trigger to turn them back on when the timeline ends. That also worked.
-But of course, when I go to the next slide, this pair of triggers causes a mess: the first one makes the tick of the already-ticked slide title disappear from the menu. Which is pretty obvious.

But then a question came to my mind: is there an option or possibility in JavaScript to make a trigger that turns the checkmark off and on ONLY for a given line (slide) in the menu?

---

I think we really need a solution, or at least a specific and official workaround, for this basic UX/UI feature. Thank you (Articulate and our great Heroes Community) in advance for your help! Wishing you all the best and great designs!

Tibi
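(For anyone exploring Tibi's question: in principle, yes — the published player's menu is ordinary HTML, so a trigger can scope the toggle to a single row by matching the slide's menu title. A sketch follows, with one big caveat: the selectors below are placeholders, because the player's menu class names are undocumented and can change between Storyline updates — inspect your own published output and substitute the real ones.)

// Toggle the menu checkmark for ONE slide, matched by its menu title.
// PLACEHOLDER selectors: '.menu-item' and '.check-icon' almost certainly
// differ in the real published player — inspect the DOM and replace them.
function setMenuCheck(slideTitle, visible) {
  var items = document.querySelectorAll('.menu-item'); // placeholder selector
  items.forEach(function (item) {
    if (item.textContent.trim() === slideTitle) {
      var icon = item.querySelector('.check-icon');    // placeholder selector
      if (icon) icon.style.visibility = visible ? 'visible' : 'hidden';
    }
  });
}

// Example: hide this slide's tick when its timeline starts,
// then call setMenuCheck('1.2 Safety Basics', true) when it ends.
setMenuCheck('1.2 Safety Basics', false);

Since this leans on undocumented player markup, anyone using it should re-test after every Storyline update.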