e-learning development
Video Publishing Error
I have a course that I tried to publish as a video. On one slide, a variable increases in value and that value is displayed on screen. If I publish the project as a course, it works fine. If I publish it as a video, the numbers don't display correctly. I'm guessing the video publishing lags behind the variable changes, or something along those lines. Anyone have any ideas? (I'm working around the issue by screen-recording the course-published version, which displays correctly.) I've attached a version of the offending slide if anyone wants to take a look.

Storyline slide order doesn't match navigation pane
Hi all - I'm very green with Storyline and don't know what I'm doing wrong here. When I preview my course, the slide order shown in the navigation pane doesn't match the order I set up in the course. I've confirmed all slides are numbered correctly, and when I click play they advance in the right order, but the navigation on the left is off. As slides advance, it looks like the user is jumping all over the place, and it also means the learner can't use the navigation accurately. Any ideas on how to fix this would be greatly appreciated! Thanks.

Storyline slide stuck loading
I'm having an issue where reviewers (both in Review 360 and in my LMS sandbox; we're using Cornerstone) get stuck on a specific slide with the three loading dots. Refreshing the page (in Review 360) or closing and resuming the course resolves the issue. The screenshots that Review 360 auto-collects show the next screen, implying the transition succeeds on the back end but doesn't display to the learner. I've seen the posts about results slides; that isn't applicable here. This slide is built from a template used two other times in the course, and the issue always occurs on the last one, never the other two. It seems to happen to each user only once per platform, so testing is tricky. I'm using a hotspot as the trigger to change slides. The trigger is of the form "when the user clicks 'HS_Confirm1': Set (variable) to (other variable), Jump to slide (slide) if (other variable) =/= (blank)." See the screenshot below. This trigger setup is identical on the slides that work properly. The slide being loaded is also a copy of the slide loaded in the other scenarios; the only difference is that the other two have video or audio that plays when the slide loads and this version does not.

Parity Between AI and Manual Translation Workflows, and AI Translation Quality Concerns
I want to start by saying that the new Articulate Localization tool is genuinely impressive. The ability to manage multiple language versions as a single course package is exactly the kind of workflow improvement our team has needed, and the in-context validation via Review is a great touch. That said, I'm running into significant gaps that are creating real problems for a current project, and I want to raise three interconnected issues.

1. Poor AI quality forces a manual workflow that breaks the multi-language learner experience

We're building a course that requires both Hindi and Bengali for our client's learners, with the requirement that learners can select their preferred language within the course itself: a single item in the LMS, not two separate courses. Both languages are available through AI translation. However, following a formal Language Quality Assessment of the Hindi AI output (detailed in point 2), the quality is not at a standard we can publish. As a result, we'll be using our internal globalisation team to provide human translations for both Hindi and Bengali, at significantly higher cost to our client.

Here's where it becomes a compounding problem: the manual XLIFF process produces standalone duplicate courses. It doesn't slot into the multi-language course stack the way AI translations do. That means we cannot offer learners a language toggle within a single course; we would have to publish two entirely separate courses in the LMS and ask learners to self-select the right one. That is a worse learner experience, harder to manage, and not what our client asked for.

To be clear: this situation was directly caused by the AI translation quality not being fit for purpose. We started this project intending to use Articulate Localization end-to-end. The tool's own output has pushed us onto a manual workflow that the tool doesn't fully support, and our client is the one bearing the cost of that, both financially and in terms of experience.
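For context on what the manual workflow involves: Storyline's translation export round-trips segments as XLIFF, along the lines of the illustrative fragment below. The file name, segment ID, and text are invented for illustration, not taken from our actual course.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xliff version="1.2" xmlns="urn:oasis:names:tc:xliff:document:1.2">
  <file original="course.story" source-language="en-US"
        target-language="hi-IN" datatype="plaintext">
    <body>
      <!-- One trans-unit per translatable segment; the translator
           fills in <target> and the file is imported back. -->
      <trans-unit id="slide1_text1">
        <source>Please ensure you have flipped all cards.</source>
        <target>कृपया सुनिश्चित करें कि आपने सभी कार्ड पलट दिए हैं।</target>
      </trans-unit>
    </body>
  </file>
</xliff>
```

Today, importing such a file produces a standalone translated copy of the course; the request here is for these imports to attach to the multi-language stack instead.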
The fix we need: allow manually translated XLIFF files to be imported into the multi-language course stack, not just as standalone duplicates. If we're providing validated human translations, we should be able to manage them within the same course package and give learners the language-selection experience the tool is designed to deliver.

2. AI translation quality for Hindi (hi-IN) is below acceptable thresholds

We had the Hindi AI output formally assessed by our globalisation team using a Language Quality Assessment (LQA) framework against the WalkMe + Training profile (≥1,000–4,000 words), applied to a 2,860-word electrical safety course (en-US → hi-IN). The overall verdict: the translation is not suitable for customer-facing content.

Error summary by category:

Category                     Minor              Major             Notes
Fluency                      14 (+2 repeated)   0                 Largest volume; affects naturalness throughout
Omission                     2                  1 (+1 repeated)   Most severe; content is dropped
Inconsistency                2 (+2 repeated)    0                 Systemic terminology variance
Inconsistent with termbase   3 (+1 repeated)    0                 Termbase not followed
Punctuation                  1 (+3 repeated)    0                 Devanagari punctuation misused
Mistranslation               2                  0
Grammar                      1                  0

What this means in practice

Our localisation team reviewed the output qualitatively alongside the LQA scorecard and identified five compounding issues:

Clarity and readability. Much of the content is technically understandable but does not read like natural, professional Hindi. Sentences are awkward or overly literal, which makes the training hard to follow. The LQA flagged 16 fluency errors across the sampled content, including several that required substantial rewrites in the corrected version, not for accuracy but for basic readability.

Missing critical information. In multiple places, important safety instructions are partially or fully absent.
The clearest example: the navigational instruction "Please ensure you have flipped all cards, watched the video, and opened the transcript before moving on" was rendered as "Please ensure you have flipped all the cards before moving on," with the video and transcript steps dropped entirely. This segment appeared twice in the course, and the omission occurred both times. This is not a cosmetic issue: learners following the Hindi version could skip key actions or misunderstand safety procedures as a direct result.

Meaning changes. Some phrases are mistranslated, particularly around risk and mitigation. The LQA flagged two mistranslation errors in the sampled content alone. Even small wording changes in this context can weaken or alter safety messages, which is unacceptable in high-risk electrical-safety training.

Inconsistent use of key terms. Key concepts (equipment names, safety gear, risk terminology) are not used consistently. "High Voltage" alone appears both as हाई वोल्टेज (transliterated) and उच्च वोल्टेज (translated) across different parts of the same course, with no consistent rule applied and the provided termbase not followed. The same idea appearing in different forms across a course is genuinely confusing for learners.

Overall brand and safety risk. The combined effect is a course that does not meet the standard of a polished, trustworthy training product. For a safety-critical topic, this introduces reputational risk for the content owner, and potential compliance and safety risk if learners misunderstand or fail to fully absorb the guidance. We would not be comfortable publishing this output without significant human rework.

Our recommendation

We recognise that the in-context validation feature in Review is Articulate's human-in-the-loop step, and we appreciate that it exists. The problem is that it can only be effective if the base AI output is of a standard that a reviewer can reasonably work with. What we received was not that.
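As an aside on the terminology point: inconsistent renderings like हाई वोल्टेज vs. उच्च वोल्टेज are the kind of thing a simple automated check can catch before human review. The sketch below is illustrative only; the segments and termbase entries are invented, and this is not a feature of any Articulate tool.

```javascript
// Simple termbase-consistency check: given the approved target rendering
// of a term and the variant renderings observed in the output, flag any
// translated segment that uses a non-approved variant.
function findTermViolations(segments, approved, variants) {
  var violations = [];
  segments.forEach(function (text, i) {
    variants.forEach(function (variant) {
      if (variant !== approved && text.indexOf(variant) !== -1) {
        violations.push({ segment: i, found: variant, expected: approved });
      }
    });
  });
  return violations;
}

// Illustrative data (not from the actual course):
var approved = "उच्च वोल्टेज";                    // termbase rendering
var variants = ["उच्च वोल्टेज", "हाई वोल्टेज"];  // renderings seen in the output
var segments = [
  "उच्च वोल्टेज उपकरण से दूर रहें।",
  "हाई वोल्टेज क्षेत्र में प्रवेश न करें।"
];

var violations = findTermViolations(segments, approved, variants);
// violations → one entry, pointing at the second segment (index 1)
```

A check like this, run over exported XLIFF targets, would have surfaced the High Voltage inconsistency mechanically.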
When a translation has major omissions, meaning changes, and systematic terminology failures throughout, the Review step stops being a validation pass and becomes a full retranslation effort, carried out by people who may not be professional translators, without the tooling or context that a language service provider would have. That's not a sustainable or safe quality-control mechanism for safety-critical content.

Our globalisation team's recommendation is that Articulate either improve the AI output quality to a standard where Review can function as intended, or explicitly position the output as a machine translation post-editing (MTPE) starting point and set user expectations accordingly. Right now, the workflow implies a level of AI quality that our experience suggests isn't there, at least for Hindi.

3. "Bangla" should be labelled as "Bengali" (or both)

A small but important usability point: the language is listed in the tool as "Bangla" rather than "Bengali." While Bangla is the correct native name for the language, Bengali is the standard English name used across the L&D industry, by language service providers, in ISO language codes (bn), and in most professional translation contexts. In practice, this caused real confusion on our project: we initially concluded that Bengali wasn't supported at all and were prepared to raise it as a missing language. We only discovered it was available by chance. If that happened to us, it will happen to others, and some won't catch the error before making decisions based on it. A simple fix would be to list it as "Bengali (Bangla)" or add "Bengali" as a searchable alias. This is a discoverability issue, not a technical one, but it has real consequences for users trying to plan multilingual projects.
In summary, our requests:

- Allow manually translated XLIFF imports to be added to the multi-language course stack (not just as standalone duplicates)
- Investigate and address Hindi (hi-IN) AI translation quality, particularly around omission, fluency, and termbase compliance
- Consider clearer guidance or workflow support for MTPE as an intermediate option between raw AI output and full human translation
- Relabel "Bangla" as "Bengali (Bangla)" or add "Bengali" as a searchable alias; the current labelling causes users to incorrectly conclude the language isn't supported

We're genuinely invested in making Articulate Localization work for our projects. These issues are the main barriers right now. Thanks for the tool and for taking this feedback seriously.

Behind the Scenes: Storyline’s Move to Modern .NET
We just wrapped a project that’s been hanging over Storyline for a long time: moving from .NET Framework 4.8 to modern .NET (now .NET 10). This one goes deeper than it might sound.

Back when Storyline was first built, choosing .NET Framework was the obvious call. This was 2010-ish. Windows dominated our space, and the .NET ecosystem gave us a lot of what we needed to move fast and build a really capable tool. That decision worked, for a long time. It also shaped some of the realities of the product today. Questions about platform support come up a lot, and early architectural choices like this are a big part of that story. They helped us move fast early on, but they also made certain paths more complex later.

Fast forward to now: Microsoft has effectively stopped evolving .NET Framework and put its energy into modern .NET. Meanwhile, we were still running on a foundation that wasn’t keeping pace with where things were going. So we made the call to move.

This wasn’t a simple upgrade. We relied on parts of .NET Framework that don’t exist anymore: AppDomains, binary serialization, and a handful of “seemed like a great idea at the time” features that modern .NET intentionally left behind. We had to rethink and rebuild some pretty fundamental parts of the product.

So what did all of this actually get us? We’re now on a modern, actively supported runtime. It’s easier for us to keep improving performance, adopt new capabilities, and evolve the platform without constantly working around legacy constraints. We also retired some very old pieces of the system along the way, which… felt pretty great 😅

And then there’s performance. Microsoft has invested heavily in performance improvements in modern .NET, and we’re seeing that surface in Storyline. We ran benchmarks across 18 Storyline projects, measuring open, save, and publish times. Every single project got faster, with improvements ranging from 0.4% to nearly 30%. The larger the project, the larger the improvement.
In the animated GIF below, I put .NET Framework (left) head-to-head with modern .NET publishing the same course. Neither project was pre-published to warm the cache, and I even gave .NET Framework a slight head start by clicking Publish there first. The GIF is sped up for easier viewing, but the result is real: modern .NET finishes publishing well before .NET Framework.

Big credit to the team that pulled this off. This was deep, risky work in some of the most critical parts of the product. Curious to hear from folks here: if you're on the latest Storyline 360, have you noticed any performance improvements when opening, saving, or publishing your projects?

Localization File Issue
I’m hoping someone can help me troubleshoot an issue I ran into with the localization tool. Last week, I used the localization feature to translate one of my courses into French. While working on the version saved to my desktop, I was able to switch back and forth between the English and French versions without any trouble. I published the course to Review 360 on Wednesday for feedback from my boss and didn’t return to the file until this morning. Now the option to switch to the French version has completely disappeared. I’ve checked my desktop and our shared folders, but I can’t locate the French version anywhere. I’ve attached a screenshot of what I’m currently seeing in Storyline in case it helps diagnose the issue. If anyone has experienced this before or has ideas on how to recover the translated version, I’d really appreciate your help!

Layers Overlapping in Storyline 360
On one of my courses, the base layer slide has 14 buttons, each of which opens an associated layer. A few of those layers link to other layers with additional information. This has worked since the course was developed in 2023. When I reviewed it today, not so much. When I click each of the 14 buttons (without clicking links to sub-layers), it works fine: each layer appears as it should. However, when I click a button on one of the sub-layers to return to the base layer, the layers start stacking, as in the attached screen capture. This occurs on slides 1.1 and 1.2, buttons 2 (Trace to Risk Record) and 10 (Mitigated Probability of Occurrence). All layers have the "Hide other slide layers" box checked. Before I change my layout and make each layer its own slide, I'm hoping there's a quicker fix.

360 images: no annotations?
It is bizarre to me that you can't add simple text objects to Storyline immersive training lessons. I'll add an example from something I developed using an older tool. In CenarioVR, adding a text annotation to a 360 image is only slightly more complex than adding text to a normal slide in Storyline. In SL, annotating a 360 image would seem to require opening the image in Photoshop and adding the text directly to the bitmap. That's pointless extra work for the developer and less flexible. Please, please tell me I'm missing something obvious and it's really easy to add text to the image.

Seekbar functionality issues on slides with "Resume to Saved State" layers
Hi everyone, I’ve run into an issue with seekbar behavior. I have a slide where the layers are set to Resume to Saved State. When I revisit the slide after moving forward in the project, the seekbar doesn't track correctly: if I drag the seekbar, it seems to control the base layer objects instead of the content on the current layer. Has anyone encountered this overlap issue before? I’ve attached a .story file to show what’s happening; please look at the tabbed interaction and question screens. On the question screen, the layers fail to maintain their visited state when the slide is revisited, and only the base layer displays. I’d appreciate help with two things: keeping the previously viewed layers visible when returning to this screen, and making the seekbar accurately control the active layer when the slide is revisited.
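A common workaround for unreliable saved state is to track visited layers with True/False variables and re-show them on revisit. Storyline triggers aren't authored in code, so the sketch below just expresses that trigger logic in plain JavaScript; all variable and layer names are hypothetical.

```javascript
// Variable-based visited-state tracking, expressed as plain functions.
// In Storyline this would be: a "Set layerX_visited to True" trigger on
// each tab click, plus "Show layer X when timeline starts if
// layerX_visited = True" triggers on the slide for revisits.
function makeSlide() {
  var visited = { layerA: false, layerB: false }; // T/F variables in Storyline
  var shown = [];                                 // layers currently shown

  return {
    // Trigger: user clicks a tab -> mark visited and show the layer.
    openLayer: function (name) {
      visited[name] = true;
      if (shown.indexOf(name) === -1) shown.push(name);
    },
    // Triggers on revisit: re-show every layer whose visited flag is True.
    onRevisit: function () {
      shown = [];
      Object.keys(visited).forEach(function (name) {
        if (visited[name]) shown.push(name);
      });
      return shown;
    }
  };
}
```

Because the state lives in variables rather than the slide's saved state, it survives revisits regardless of the slide's revisit setting. (It doesn't address the seekbar-targeting half of the problem, which looks like player behavior rather than trigger logic.)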