*ENTRIES ARE NOW CLOSED* GIVEAWAY: DevLearn is coming — want to join us? 🎉
DevLearn is almost here (Nov 12–14)! Whether you’re already going or hoping to, we want to celebrate with the learning community. To share the excitement, we're giving away TWO DevLearn 3-Day Conference passes valued at $2,395 (travel and accommodations not included, U.S. residents only).

👉 To enter: Comment below and tell us: 💬 Why you want to go and/or what you hope to learn! That’s it! We’ll randomly select two winners from the comments.

⏰ Enter by: Tuesday, October 28, 2025 8:00 AM PT
🏆 Winners announced: Wednesday, October 29, 2025 by 5:00 PM PT

Already have your ticket? Drop a “👋 I’ll be there!” so we know who to connect with in Vegas!

UPDATE: Thank you to everyone who shared their stories! We’ve loved reading your entries and seeing so many opportunities to collaborate and learn from this amazing community. 🎉 Congratulations to our winners: Kaitlyn-Skyline and larryvanwave-ff. We’re so excited for you and hope to do more fun giveaways like this in the future. Thanks again to everyone who participated!

Rise Learning Journal / Notes
Update (04 June 2025): This project is now available open source at GitLab. Here's a quick glance at the license (same as Moodle).

License: GNU GPL v3
Type: Strong copyleft license
Implications:
✅ You can use, modify, and distribute Rise Learning Journal freely.
🔁 If you distribute a modified version, you must also release the source code under the GPLv3.
❌ You cannot make it proprietary or incorporate it into closed-source software.
✅ You can use it commercially, but the GPL applies to any distribution.

Instructions for implementation are further down the page under the heading BETA Version Release.

I've been working on a Learning Journal for Rise. I have a BETA version I'd like to share on SCORM Cloud. The features we have included so far are:

Comments persisted between sessions (SCORM 1.2 & SCORM 2004 3rd and 4th Ed)
Save comments, associated with individual blocks
Comments are organised into topics/pages
Edit existing comments
Delete comments
Print comments (to printer, or PDF if you have the required software installed)
Export comments to Word (*.doc)
Pagination (if comments exceed a defined number, they are split into pages)
Add the functionality to individual blocks, or globally.

There are some things that need to be finalised, which will not take a great deal of work to complete:

Mobile compatibility
WCAG 2.2 AA

What I'm looking for is a bit of community input, as I know people have wanted this feature for quite some time. This is my best guess of how somebody might use a learning journal, but I would love to hear any other examples of how it could function, or additional useful features that could be included. If you would like to check it out, you can visit this URL: Rise Learning Journal on SCORM Cloud (trial account, limited to 10 users; ping me at notes@rebusmedia.com if it's maxed out and I'll clear users).
Update (3rd December 2024): I have continued to work on this project and it is now SCORM 2004 compatible. Again, it is using cmi.comments_from_learner. Unfortunately I found a significant issue with the Articulate implementation of the SCORM 1.2 and 2004 comments. I am in communication with support after logging the issue. I am hoping I can convince them that the implementation is incorrect and that the base script will be updated. In the meantime, I am applying a patch to the Articulate "WriteComment" function to ensure comments are stored correctly for SCORM 1.2 and SCORM 2004. I have also made some cosmetic changes and updated the CSS for the HTML to ensure the application picks up the current Rise module theme (colours, fonts etc). I've fixed a few bugs I found along the way with regards to deleting journal entries, and editing journal entries when not on the page they originated from. This all appears to be working really well now. My next priority will be working on the CSS to ensure it is mobile compatible. Once all of the HTML and CSS is finalised, I'll then work on the accessibility. I've been implementing aria attributes as I go along, but there is still some testing and development to be done on that side of things. I will be looking to release this as a BETA to a handful of people early in the new year.

Update (9th December 2024) Accessibility: Started work on accessibility. Currently implementing, and then will be looking to test using JAWS 2024 and NVDA over the xmas holiday period. On track for BETA release Jan 2025.

Update (09 January 2025) Accessibility & refactoring: Still working on accessibility and refactoring. There is a little more work than first forecast. Yes, I know, you've never heard that from a developer before. I'm 50/50 as to whether I can get this out in January. It will depend on other work commitments, but I will keep this post updated.
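For anyone curious what that kind of patch involves: in SCORM 1.2, all learner comments live in a single cmi.comments string, so a write that replaces instead of appends silently destroys notes from earlier sessions. Below is a minimal, hypothetical sketch of append-style writing. The function and the fake API object are illustrations only, not Articulate's actual WriteComment code; only the data-model names (cmi.comments, LMSGetValue, LMSSetValue) come from the SCORM 1.2 runtime spec.

```javascript
// Hypothetical sketch: appending to SCORM 1.2 learner comments instead of
// overwriting them. In a real course, `api` is the LMS-provided API object
// found by walking up the frame hierarchy.
function appendComment(api, text) {
  // SCORM 1.2 stores all learner comments in one string, so a naive
  // SetValue call would discard earlier sessions' notes.
  const existing = api.LMSGetValue("cmi.comments") || "";
  return api.LMSSetValue("cmi.comments", existing + text) === "true";
}

// Minimal fake LMS API for trying the function outside a real LMS:
const fakeAPI = {
  data: { "cmi.comments": "first note. " },
  LMSGetValue(key) { return this.data[key]; },
  LMSSetValue(key, value) { this.data[key] = value; return "true"; },
};

appendComment(fakeAPI, "second note.");
// fakeAPI.data["cmi.comments"] is now "first note. second note."
```

The same idea applies to SCORM 2004, except comments there are a collection (cmi.comments_from_learner.n.comment) rather than one string.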
I have decided to simplify the colour scheme and move away from using the defined "branding" colours inherited from Rise, as I was finding this a bit unpredictable with colour contrast. In the interest of ensuring the content has the best colour contrast, I'll be hard coding the CSS rather than using the CSS variables defined in Rise. I'll re-visit this in future. Looking at the code, it needs some serious refactoring: I found some redundancies, so I need to delete any unused code that I added and then abandoned. Oh, and Happy New Year.

Update (24 January 2025) Accessibility & refactoring: Almost ready for BETA release. Should be ready for release next Tuesday. Accessibility is just about completed. I think I could spend another few days improving things, but this will be a good first release.

BETA Version Release

Contact: notes@rebusmedia.com

Minimum requirements:
Rise course published to SCORM 1.2 or 2004 (xAPI not currently supported)
LMS support for cmi.comments (the TalentLMS cmi.comments implementation is not supported, as the comments are not persisted between sessions)

Release Notes: This is a BETA release and is provided as is, without any warranties. It should be used with caution and fully tested for your use case before considering it for production. If you do find bugs, please report them to notes@rebusmedia.com (include browser, LMS, device) and I'll release a fix as quickly as possible. This is a side project and so comes second to our day job, which can be busy, so you may need a certain level of patience. Fixes can be expedited for your use case through engagement of our services for time-critical projects. It has been tested on mobile, but not extensively (Google Pixel + iPhone). Win/Chrome has been the browser used for development, and testing has also been performed on Win/Firefox and Win/Edge.
Feature requests: If you require any features that deviate from the BETA version, they will be considered on their merit, but can only be guaranteed for your own implementation through engagement of our services. We have a long list of features that we would like to add if there is enough interest in the application and if it is viable.

Accessibility: We made the decision to remove colors from the modal window theme to keep it simple, generic, and accessible (high color contrast). The application has been tested with JAWS 2024, is fully keyboard accessible, and keeps assistive technology users informed of what is happening when interacting with the modal window. I'm always willing to make improvements to accessibility as a priority. Accessibility issues are treated as bugs, not feature requests.

Implementation:
Publish your Rise course to either SCORM 1.2 or 2004.
Download the two files notes.min.css and notes.min.js to your computer.
Extract your published Rise course to your computer and then copy the notes.min.css and notes.min.js files to the scormcontent\lib folder.
Open the scormcontent\index.html file in a simple text editor such as Notepad and paste the following text just before the closing head element, which looks like this: </head>.
<link type="text/css" rel="stylesheet" href="lib/notes.min.css">
<script type="text/javascript" src="lib/notes.min.js" data-notes-per-page="5"></script>

It will look something like this:

<!-- Excerpt of scormcontent/index.html starts -->
    window.__loadEntry = __loadEntry
    window.__loadRemoteEntry = __loadRemoteEntry
    window.__loadJsonp = __loadJsonp
    window.__resolveJsonp = __resolveJsonp
    window.__fetchCourse = __fetchCourse
  })()
</script>
<link type="text/css" rel="stylesheet" href="lib/notes.min.css">
<script type="text/javascript" src="lib/notes.min.js" data-notes-per-page="5"></script>
</head>
<body>
<div id="app"></div>
<!-- Excerpt of scormcontent/index.html ends -->

You can adjust the data-notes-per-page="5" attribute to determine how many notes should be listed in the viewer before the pagination (note navigation) kicks in. Save the scormcontent/index.html file.

It's important to get this next bit right, as the LMS expects the imsmanifest file in the root of the zip file you are about to create. Navigate to the folder containing imsmanifest.xml, select all (CTRL+A), and then select archive/zip/compress (depending on the software you use, the terminology can differ). It must be a zip file, though, and the imsmanifest.xml file must be in the root of the zip file.

Update (28 January 2025) Print functionality improvement: After some user feedback, I have adjusted the print functionality so that there is less chance of the student losing the course window during printing. When printing is completed or cancelled, the print page closes and the user is returned to the course window.

Update (30 January 2025) Fix: Added functionality to handle the learn.riseusercontent.com cmi.comments implementation. The cmi.comments implementation is incorrect on that LMS and requires the application to retrieve all comments and save them to the LMS, rather than appending to existing comments. This could cause memory issues if users add multiple long comments over time.
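As a side note on the data-notes-per-page option mentioned above, the splitting itself is simple to picture. This is a hypothetical sketch; the function name and note values are invented for illustration and are not the viewer's actual code.

```javascript
// Illustrative sketch of how a notes-per-page setting drives pagination:
// slice the full list of notes into fixed-size pages.
function paginate(notes, perPage) {
  const pages = [];
  for (let i = 0; i < notes.length; i += perPage) {
    pages.push(notes.slice(i, i + perPage));
  }
  return pages;
}

// Reading the attribute from the script tag could look like:
// const perPage = Number(document.currentScript.dataset.notesPerPage) || 5;
const pages = paginate(["n1", "n2", "n3", "n4", "n5", "n6", "n7"], 5);
// pages.length → 2; the second page holds the remaining two notes
```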
CSS: Improved CSS for mobile view (using the full height of the screen to display the application).

Update (31 January 2025) Bug: There is a known issue with TalentLMS. TalentLMS does not persist SCORM 1.2 cmi.comments between sessions; all comments are discarded at the end of the session. For this reason, we cannot support TalentLMS unless TalentLMS changes the functionality of its SCORM 1.2 cmi.comments. CSS: Improved CSS for mobile view, supporting devices with a minimum screen width of 355px.

Update (07 March 2025) New configuration option: I have added a configuration option that allows you to determine where the note button should be inserted (instead of globally). To do this, follow these steps:

Grab a copy of the latest version of the JS and CSS files.
Wherever you would like to insert the note button within the Rise authoring environment, simply add {RM.NOTES} to the top of the block.
Follow the Implementation instructions outlined earlier in this post. When you come to add the script to the HTML file, you will need to add an extra data attribute to the <script> tag called data-notes-global and set the value to false:

<script type="module" src="lib/notes.min.js" data-notes-global="false"></script>

Update (12 March 2025): BETA distribution files, including the README.MD document, are available to download. This will be the last feature addition for a while now. Bug fixes and stabilisation will continue, but any new features will have to wait or can be requested via notes@rebusmedia.com.

Prompt: You can now add a prompt to the note when defining a notes button using the {RM.NOTES} directive. The prompt is defined as a configuration option in the following way: {RM.NOTES PROMPT="Prompt text goes here."}. It would look something like this in the Rise author environment.
This would ensure that a notes button is inserted on this block and, when selected, will display the text input preceded by the prompt "What should you include in your clinical notes?". In order to use the prompt, you must set the global flag to false using the <script> tag as follows:

<script type="module" src="lib/notes.min.js" data-notes-global="false"></script>

Note button position: The note button's default position is the top right of the target block. The button can now be positioned at the centre bottom of the target block. The position configuration can be used with the global flag set to true (buttons inserted automatically on blocks) or set to false (buttons only inserted where the {RM.NOTES} directive is present within the block):

<script type="module" src="lib/notes.min.js" data-notes-button-centre-bottom="true"></script>

Designing Immersive Phone Conversations in Storyline
Ever have two characters talk in a training module, but it still feels flat, even with speech bubbles, audio, and triggers? This (FREE) Storyline phone conversation template changes that. Whether you're designing for sales, compliance, healthcare, or support, it creates real, layered convos that feel like you're eavesdropping on a call.

Animated phone effects
Realistic voiceover dialogue
Transparent APNG waveforms (way better than GIFs!)
Custom triggers for pick-up/end call
Clean, modern layout with animated text

Watch how it works: https://www.youtube.com/watch?v=kMpUcYJRNnE
Preview the demo: https://www.redesignedminds.com/Discuss/story.html
Download it free: https://drive.google.com/file/d/19AvmE7q3PAUbXoNKIViQtPNqCwUoFDQW/view?usp=sharing

If your training includes a conversation, this is how you bring it to life.

How I Built This: How I Vibe-Coded a People Manager Simulation
When the new Rise 360 Code Block (Beta) feature launched, I wanted to see just how far it could be pushed. Could you build something more than static content? That’s how the People Manager Simulation came to life – a fully playable, story-driven experience built entirely inside a single Rise code block, using only HTML, CSS, and JavaScript. In this video, I explain how it was created and how you can repurpose this approach in your own projects.

Why I Made This

In my day job, I design learning experiences for real teams, often around leadership, people management, and workplace decision-making. I wanted to create something that shows how these kinds of soft-skills topics can be transformed into immersive simulations without needing heavy development tools. The result is a game where you step into the shoes of a brand-new team leader, navigating real-world decisions that impact morale, performance, retention, and stress. Each choice has a trade-off, and yes, you can get “sacked” if you mismanage your stats. In the video, I mention that this project was built gradually, late evenings, after work, once my son was asleep. There were plenty of failed tests, odd bugs, and “why won’t this work” moments along the way. I did consider going back and documenting every single prompt and adjustment… but honestly, that would read like an increasingly impatient diary of me negotiating with ChatGPT! So instead, I wanted to share a simpler, more practical way for you to repurpose what already works.

How I Built It

Rather than starting from scratch, the method I show in the walkthrough involves:

Uploading the existing working code of the simulation.
Giving ChatGPT a single clear prompt that explains: this is for Rise 360's custom code block; it should learn the structure and logic of the original simulation; and it should rewrite the theme, dialogue, and characters for a new scenario.
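To make the mechanic concrete, here is a rough sketch of the kind of stat-tracking loop such a simulation can run on. All names, numbers, and thresholds below are invented for illustration; the real simulation's code is attached to this post.

```javascript
// Hypothetical sketch of a stat-driven decision mechanic: each choice
// applies trade-offs to the tracked stats, and the player is "sacked"
// when a stat crosses its limit.
const stats = { morale: 50, performance: 50, retention: 50, stress: 20 };

function applyChoice(effects) {
  for (const [stat, delta] of Object.entries(effects)) {
    // Clamp every stat to a 0–100 range.
    stats[stat] = Math.max(0, Math.min(100, stats[stat] + delta));
  }
  // An example fail condition: too much stress or collapsed performance.
  return stats.stress >= 100 || stats.performance <= 0 ? "sacked" : "playing";
}

// One trade-off choice: pushing the team raises output but costs morale and stress.
const state = applyChoice({ performance: +10, morale: -5, stress: +15 });
// state → "playing"; stats.performance → 60
```

In a Rise code block, a loop like this would sit behind the choice buttons, with the HTML and CSS rendering the dialogue and stat bars.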
In the video, I demonstrate how to use the current People Manager Simulation code as context (use the download attached below).

📁 Download: People Manager Simulation HTML; attached below.

You then give this to your LLM of choice as an attachment and provide your repurposing prompt; the one I used can also be downloaded below.

📁 Download: GPT Prompt for Repurposing Existing Demo; attached below.

Key Takeaways

Start from a working simulation instead of a blank page.
Use a single, focused prompt to repurpose the entire code and story.
Attach your full code as context so the model understands structure and logic.
Re-use this workflow to adapt learning scenarios quickly—no coding expertise required.

The Result

Here’s the outcome of my own repurposing test from the walkthrough: a completely new narrative built using the same base code and single prompt. Is it perfect? No. But it’s a solid foundation—and all this came together in about ten minutes.

📁 Download: The Result — Full New HTML Code; attached below.

Final Thought

The best part of this approach is accessibility: you don’t need to be a coder to build something that feels custom. By starting with a working framework and iterating through clear, focused prompts, you can turn any learning scenario into a playable, data-driven experience. Whether it’s leadership, compliance, or customer service, this structure gives you the foundation to explore how choices shape outcomes, all inside Rise 360. My final ask is: please repurpose and improve on any of the ideas shared in this article. Let me and the wider community know how you get on.

💬 Ask Me Anything! I’d love to hear your feedback and answer any questions about the build. Drop your thoughts in the comments below—I’ll be checking in and responding!

Want to Share Your Build? Do you have a project you’d love to share with the community? We’re always looking for more How I Built This stories.
Whether it’s a game, interaction, or unique design, we’d love to feature your process. Drop a note in the comments or reach out to the community team if you’re interested!

Storyline 360 Pros — What’s Your Favorite “Hidden Gem”? 💎
As someone who’s spent a lot of time working with (and on!) Storyline 360, I’ve come to appreciate the power in the little things — those lesser-known features that quietly make our lives easier. Here's one of my personal favorites:

🎧📽️ Cue Points with the “C” Key: I recently spoke with a customer who struggled to time trigger actions to audio and video media on their slides. They would preview the slide, make note of when a trigger should be fired, then return to slide authoring view to add a cue point to the timeline to tie into the trigger event. This required a lot of manual back-and-forth between authoring and previewing. I often have to do the same thing, and there is an easier way. If you use stage preview (accessible via the "Play" icon in the lower-left corner of the Timeline panel), Storyline will stay in the slide authoring view and play the timeline of the slide, including any audio or video media that's present. As it plays, you can press the "C" key on your keyboard to add cue points at the current playback position. It’s a simple way to place cue points in real time, right where they’re needed — perfect for syncing trigger actions to specific moments in your media.

[GIF depicting Storyline 360's UI and using the "C" key to drop cue points on the timeline.]

Now I’m curious: What’s your favorite under-the-radar Storyline feature? Something small, subtle, maybe even a little obscure — but that you personally couldn’t live without. Drop it in the comments — I’d love to learn what little gems you rely on. 👇

Meet Your New Teammate: First Impressions of Articulate’s AI Assistant
Introduction: Why AI Built for eLearning Changes Everything AI is everywhere these days — from writing emails to generating images, creating videos, and more. We all know tools like ChatGPT, Midjourney, DALL·E, Grammarly, Synthesia, and plenty more. They’ve quickly become part of our daily workflows — or at least, they have in mine! But if you’ve ever tried using these tools to help build an eLearning course, you’ve probably noticed something… They’re smart — but they don’t really get what we do. That’s why I was both excited and curious when I heard that Articulate was introducing an AI Assistant, built right into Storyline and Rise. Finally, an AI tool designed specifically for instructional designers and eLearning developers. I’ve been working with Articulate tools for over 14 years, and like many of you, I’m always looking for ways to speed up my workflow without sacrificing creativity or quality. So the big question was: Could this AI Assistant actually help me design or improve my courses — in a way that generic AI tools can’t? Spoiler alert: It can. And it did. This is the first post in a series where I’ll share how Articulate’s AI Assistant is changing the way I approach course development — making everyday tasks faster, smoother, and honestly, a bit more fun. So let’s take a closer look at why having AI built specifically for eLearning really makes a difference. Why Use Articulate’s AI Assistant Instead of Other AI Tools? Like many of you, I’ve used my fair share of AI tools — from ChatGPT for brainstorming to DALL·E for generating creative visuals. These tools are great, but they’re generalists. They don’t know (or care) that I’m building an eLearning course. That’s where Articulate’s AI Assistant stands out. It’s designed inside Articulate Storyline and Rise, for people like us — instructional designers, eLearning developers, and content creators. No copy-pasting between tools, no explaining to a chatbot what a "learning objective" is every single time. 
Here’s why I immediately saw the benefit of having AI built right into the tools I already use: It understands context. You’re not starting from scratch with every prompt. The AI Assistant knows you’re working within slides, quizzes, scenarios, and learning objectives. It fits seamlessly into your workflow. No need to bounce between apps or worry about formatting. You stay in Storyline or Rise, focused on creating — and the AI is right there when you need a boost. It’s tailored for eLearning tasks. Whether you’re drafting instructional text, generating quiz questions, or adjusting tone for different audiences, it’s built to support tasks we handle every day. Other AI tools are powerful, but they weren’t made for eLearning. Articulate’s AI Assistant feels like it was built by people who understand the little challenges that come with designing courses — and that makes all the difference. What Impressed Me Right Away I went in with low expectations — I mean, AI is cool, but it’s not magic, right? Well, after just a few prompts, I found myself genuinely impressed. Articulate’s AI Assistant is fast and simple. No manuals, no guesswork. You type, it helps. It felt less like learning a new feature and more like having a colleague nearby to bounce ideas off. Articulate’s AI Assistant gets you moving. The hardest part of creating content is often just getting started. The AI Assistant hands you a decent draft so you’re not stuck wondering how to begin. From there, it’s all about tweaking. Articulate’s AI Assistant understands eLearning. This isn’t some generic writing tool — it gets that you’re creating learning content. Whether it’s suggesting learning objectives or drafting quiz questions, it speaks the language of eLearning. By the end of my first session, I realized this tool isn’t just about saving time — it’s about keeping me in that productive flow state. Less overthinking, more doing. 
Wrapping Up — And What’s Next

After just a short time using Articulate’s AI Assistant, I knew it was going to be part of my daily routine. It’s not here to replace creativity — it’s here to remove those little hurdles that can slow us down. No more blank slides. No more overthinking simple tasks. And the best part? I’m only scratching the surface. In my next post, I’ll show you how I’m using the AI Assistant to speed up writing — from slide content to quizzes and even branching scenarios. That’s where things get really interesting. Have you given the AI Assistant a try yet? I’d love to hear how it’s working for you — or if you're still wondering how to fit it into your workflow. Drop a comment below and let’s share ideas! Stay tuned — more AI-powered tips coming soon!

About me: Paul Alders LinkedIn Profile The eLearning Brewery

Customizable 3D Model Viewer in Storyline 360 Using Three.js
TLDR: A 3D model viewer (GLB/GLTF and OBJ file formats) for Storyline 360 using Three.js. See the links below. Please check the comments for additional updates.

Updated (2025-03-08): Minor adjustments to model reloads and resource disposal. Use the new project file.

Demo: 3D Model Viewer https://360.articulate.com/review/content/87acc80c-2820-4182-b99b-db9e7fd60852/review
Demo: 3D Model Display Customizer https://360.articulate.com/review/content/d740f374-c2ac-4ae0-92bc-255b7d35ee1a/review

Introduction

In my ongoing efforts to better my skills, I’ve been learning more about Three.js, a 3D animation and visualization library for the web. At its simplest, Three displays 3D models in a web browser. This is something that many Storyline users have expressed interest in but, alas, it is not directly supported. Storyline actually does incorporate Three.js as its viewer for 360-degree images but otherwise does not include it as a part of its projects.

This Project

Since displaying graphics with Three is actually fairly easy, it seems like this is something that would have already been done. Since I couldn’t find much regarding integrating Three with Storyline, however, I decided to use this as a test bed for my learning experience. As a disclaimer, I am not a programmer, so my code may be inelegant and inefficient in areas. I also know very little about 3D modeling. My design decisions were based upon what I found interesting or potentially useful and may overlook standard practices or some key features that one might expect. Feel free to comment.

The Model Viewer

I broke this project into two parts. Both are created within Storyline modules. One is a model viewer template that you can use to display two varieties of 3D model files (GLB/GLTF and OBJ models). It accommodates a configuration option (a list of variables in a trigger) which allows you to customize many aspects of how the model is displayed on your slide.
You can include multiple models, each with their own configuration, and switch between them. The size of the model viewer matches the container object on your slide, so it can be sized to your needs. The template module is pretty simple, with a viewer container, some JavaScript, and a handful of triggers and variables. I’ve included the project file for the viewer. You should be able either to adapt it directly or to incorporate its parts into your own projects.

The Display Customizer

The second part is another, more complicated Storyline module. This component can be used to customize how your model is visualized. I linked much of the viewer’s functionality to a collection of Storyline controls, spread across multiple slide layers. Once you have your model set up and adjusted to your liking, you can export a block of JavaScript that represents all of the settings used in the viewer’s configuration options. You will copy and paste this into one of your viewer’s project triggers. Combined with your model files, this enables the 3D viewer to reproduce what you designed for display on your slide. Of course, you can also manually edit the configuration options if you desire, but for anything more than minor edits, this is far easier. Due to its complexity (4000+ lines of script and several hundred variables and triggers), I have not shared the project file. I will, however, leave an active link to the published module that you can use to set up your models.
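To illustrate the idea of exporting settings as paste-ready trigger text, here is a hypothetical sketch. The window.rmViewerConfig name and the settings shown are invented for illustration and are not the customizer's actual schema.

```javascript
// Hypothetical sketch of the "export settings" idea: serialize the current
// settings object into a block of JavaScript that can be pasted into a
// Storyline trigger, where the viewer reads it back at runtime.
function exportSettings(settings) {
  const lines = Object.entries(settings).map(
    ([key, value]) => `window.rmViewerConfig.${key} = ${JSON.stringify(value)};`
  );
  return ["window.rmViewerConfig = {};", ...lines].join("\n");
}

const snippet = exportSettings({
  modelFile: "myModel/scene.gltf",
  rotateSpeed: 0.5,
  aspectRatio: [4, 3],
});
// snippet now holds four lines of paste-ready JavaScript
```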
The Details (for anyone who cares)

Inspiration for this project came from the following sources:
https://threejs.org/docs/
https://threejs-journey.com/
https://github.com/donmccurdy/three-gltf-viewer
https://community.articulate.com/discussions/discuss/drag-and-drop-objects-into-storyline/994181
https://github.com/Sphinxxxx/vanilla-picker

Model Viewer

The viewer module consists of:
A web object containing your model files and some JavaScript
A viewer rectangle on your slide with its accessibility text set to “glb”
A few variables
A few triggers, including a main JavaScript routine and some configuration options

The Web Object

We will use a web object to include your model files and the Three.js base code in your project. While Storyline can load a version of Three when needed, it is older and lacks many of the additional resources we need. The script in the web object is a custom bundle of the current Three components we need in this project, along with the js-beautify library. The functions and classes are made available as global variables under window. Using a static version ensures that everything works together even if Three issues updates that break interactions. You will also include copies of your model resources. The configuration script specifies the base model files for the viewer. Additional files are typically referenced from within the base files. It is easiest if you create a folder for each model, and place all of the related files inside that folder, inside the web object folder. The viewer supports GLB, GLTF, and OBJ models. GLB models are typically one file with everything embedded. GLTF models often have additional texture files associated with them. Preserve any file structure that came with your model (i.e., if your textures are in their own folder, leave them there; if they are at the same level as the model file, leave them there; also, don’t change any of their names).
You can rename the GLTF or GLB files and their containing folder, but they must match what is listed in the configuration script. OBJ models usually require several files. Include them all unless you know they are not needed. Final renders and reference images are not normally needed. As with GLB and GLTF, OBJ model files can be renamed but must match the configuration script. There is also an MTL file that should use the same name as the OBJ file (this allows the script to find it). Don’t rename the texture files unless you know they need to be changed.

Note: If you download models from places like CG Trader, Turbo Squid, or Sketchfab, then sometimes the textures are separate from the models, or the filenames don’t match what is specified within the GLTF or MTL files. You may have to manually locate and/or rename the texture files. Sometimes you might need to edit the texture specs inside MTL files as well. If you make your own models, then I’ll assume you have what you need.

You can also include optional environmental textures, which can provide lighting information and visual backgrounds within the viewer. These are supported as EXR, HDR, JPEG, PNG, and GIF files. If you include these, create a folder inside your main model folder called myEnvMaps and put the required environmental texture files inside this folder. Finally, add an empty text file to the web object folder and rename it to index.html. Once the web object folder is ready, add it to your project in scene 2.

Note: Anytime you change the contents of the web object folder, you need to follow ALL of the steps below.
Delete the existing web object.
Insert the new web object (browse to the folder, set to open in new window).
Move the web object to the bottom of the timeline list.
Publish the single slide in scene 2.
Click the Load Now button to open the web object page.
Copy the portion of the URL text matching story_content/WebObjects/[random characters]/. Make sure to include the trailing “/”.
Paste this value into the dataFolder variable.

The Viewer Rectangle

Create a rectangle. Fill and outline don’t matter, as it will be removed when published. Right-click on the shape’s timeline entry, select accessibility, and edit the text to read glb. You can change this value in the tagViewerContainer variable. This rectangle can be any size or shape and placed anywhere on the slide.

Variables and Triggers

Make sure all of the variables listed in the viewer template project are included in your project. There is one trigger on the slide master; it loads the JavaScript for Three (from the web object). On the base slide, there is one trigger for the main JavaScript viewer routine. For each model you wish to display, there is one additional JavaScript trigger that sets the configuration options. You can generate new text for these triggers using the display customization module.

Display Customizer

The viewer has many options. Most are built into the Three objects used to display the model. A few are specific to this viewer implementation. You can manually edit the configuration trigger for each model if desired, changing values to fine-tune your viewer. For large-scale changes or initial setup, you might want to use the display customizer module (linked above).

Loading Models

The interface consists of a viewport on the left and various controls on the right. To load a model, you can drag and drop one or more files or a folder onto the viewport (or drop new files later to change models). The viewer will try to find and load the model and all of its associated textures and environment files. Dropping the files is convenient as an interface, but it requires extra processing to access the files.
Since some of the model files can be large, it might take several seconds before everything gets loaded. Also keep in mind that all of the processing is done in the browser on your computer; if your machine is not very robust, processing times may be longer. If in doubt, open the browser's inspector panel console and watch for errors related to loading the files, especially when trying to load new models. Sometimes you don't have all the files you need, or they're in the wrong folder. You will see which files the viewer is looking for and whether they are found. If unexpected problems occur, try reloading the browser window. Feel free to comment here if you discover recurrent error conditions.

Base Settings
The base settings panel provides the main interface. You can see and control key aspects of lighting, as well as environmental, animation, and shadow conditions. You can also adjust the viewport aspect ratio in case you need something that is not square.

Lighting
Unless you set up an environment to provide illumination, you will need some lights to see your model. There are four types of lighting available. Ambient is equivalent to overhead sunlight. The other three types offer up to four light sources each. The controls show the colors, and the corners control specific features (see the Help button for details). Right-click on each square to get additional options. Each light type has its own options, and there is a color picker to set your desired color. Changes are immediately visible in the viewport; if you can't see a change, you may need to adjust the intensity or the position of the light. There is also an option for a helper, a visual representation of the light's shape and position. Turn this on to help you set up the lights.
Syncing Lights
Since the viewer offers the ability to orbit the camera around your model, lighting usually remains static in relation to your model (i.e., the model and lights appear to rotate together in the viewer). A custom feature in this implementation is the ability to sync your lights to the camera so they also move around your model, creating interesting effects. This can be enabled for each individual light, in two different sync styles. Lights may be made relative to the camera position, so they appear to remain in one place in 3D space. They may also be synced to the direction of the camera, at a fixed distance; this is similar to having a flashlight trained on your model as you view it. You can also specify whether each light generates shadows, which can add realism to your displays. Shadows require significant processing, so use them sparingly to prevent laggy performance.

Other Settings
Other settings, including rotation direction and speed, environment controls, intensities, and animations, are available. Animations seem to work with GLB/GLTF models; OBJ models do not support animation directly. Try out the various controls on your model to see the effects.

Export Settings
When you have set up your model as desired, use the Loader Settings button to export a snapshot of the current settings, including the model filenames and camera positions. Make sure everything is in the position you want it to start in before you click the button. You will see a long list of settings that can be highlighted and copied; this gets pasted into the options trigger in the Model Viewer module. See the triggers attached to the example buttons in the demo file. You can also load and save copies of settings as session data in your browser. This is useful if you have commonly used settings you want to reuse, or if you want to pick up where you left off the previous day. Note: these are local to your machine and browser.
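The "flashlight" style of light syncing described above boils down to simple vector math. This is an illustrative sketch only (not the viewer's actual code): the light is placed a fixed distance from the model, along the direction of the camera, so it follows the view as you orbit:

```javascript
// Illustrative math: keep a light a fixed distance from the model (assumed to
// sit at the origin), in the direction of the camera. As the camera orbits,
// recomputing this keeps the light "trained" on the model like a flashlight.
function syncLightToCamera(cameraPos, distance) {
  const len = Math.hypot(cameraPos.x, cameraPos.y, cameraPos.z);
  return {
    x: (cameraPos.x / len) * distance,
    y: (cameraPos.y / len) * distance,
    z: (cameraPos.z / len) * distance,
  };
}

// Camera at (0, 0, 10); light held 5 units from the model, toward the camera.
console.log(syncLightToCamera({ x: 0, y: 0, z: 10 }, 5)); // { x: 0, y: 0, z: 5 }
```

The other sync style (relative to camera position) would instead add a fixed offset to the camera position each frame.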
They will not be available elsewhere. You can also Apply the loaded or the default settings to the current model if desired. The Defaults when Drop Loading checkbox indicates whether newly dropped model files use the current settings or the default start-up settings, in case you prefer one over the other.

Technical Notes (thanks for reading this far)

Loading Files
The Model Viewer uses physical model files included with your published project. This increases your project size but improves the loading speed of the models on the slide. The Display Customizer uses a file-drop mechanism to make it easier to switch between random models. It works by parsing through the dropped files or folders and converting them into URL blobs; the blobs act like internal web addresses pointing to each of your files. Large files, especially environment textures or complex models, may take a while to fully process and load (the Burger model, for example). When you use the Model Viewer in your final product, performance should be better, since you only need a single set of files and they are included locally. You could potentially modify the viewer script to allow loading from external URLs rather than local files, but I have not done that yet.

Environments
Environment textures are 360-degree images, similar to what you use in Storyline. The format can be EXR, HDR, JPEG, PNG, or GIF. Only equirectangular, 2:1 images are supported. EXR and HDR files tend to be very large, so keep that in mind. When using an environment, Three infers lighting information from the selected image, making external lights unnecessary. If you want to use additional lights, you will need to lower the Environment Intensity setting so the lights don't get washed out. The environment does not need to be visible to have an effect; if you want it visualized, the image will replace the background color.
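Since only equirectangular 2:1 images are supported, a quick pre-flight check on image dimensions can catch bad environment maps before they fail in the viewer. A minimal sketch (the function name is made up for illustration):

```javascript
// Hypothetical check: equirectangular environment maps must be exactly twice
// as wide as they are tall to wrap correctly around the scene.
function isEquirectangular(width, height) {
  return height > 0 && width === 2 * height;
}

console.log(isEquirectangular(4096, 2048)); // true  - valid 2:1 panorama
console.log(isEquirectangular(1920, 1080)); // false - ordinary 16:9 photo
```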
Since the focus is really on your model, it is normal for the environment background to be somewhat blurred. Using low-resolution images as textures makes this much more pronounced. If you want crisp images in the background, I believe you would need to modify the script to project the image onto a sphere instead, as you would when displaying 360-degree images (maybe I'll add this later).

OBJ Models
My understanding is limited, but environments don't project properly (or at all) onto imported OBJ models. You can display them, but they provide no lighting effects. Supposedly you can apply the environment textures to the meshes within the model, but I couldn't get that to work. My approach, awkward but I like the outcome, is to replace all of the meshes in the loaded OBJ model with new meshes, apply the existing settings, and make some adjustments to shine and gloss settings on the fly. This results in a final model that responds to your environment lighting. I found that the test models I downloaded all seemed to come out super glossy, so I added a few simple steps to calculate relative gloss levels between the model components and applied an overall adjustment to set them to a reasonable level. I was happy with the test results; your mileage may vary. If your OBJ models don't come out as you expected, you may need to adjust the MTL file to fine-tune the output. I've also found that many OBJ model files (the MTL in particular) contain erroneous paths or incorrect textures assigned to material properties. If your model looks all white, black, grey, or some odd color, check the MTL file (it's plain text) and verify the data. Fix any broken paths, check whether textures are supposed to be in their own directory, and make sure the correct textures are assigned, particularly on the lines starting with "map_", which assign the texture images to material properties. Look at the actual texture images, the MTL file, and the Wavefront reference linked below.
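Since MTL files are plain text, the "map_" audit described above is easy to script. A hedged sketch (the helper name and sample material are invented for illustration) that lists every texture assignment so you can verify each referenced file exists:

```javascript
// Illustrative sketch: scan MTL text for "map_" statements so the referenced
// texture filenames can be checked against the files actually on disk.
function listTextureMaps(mtlText) {
  return mtlText
    .split(/\r?\n/)
    .filter((line) => line.trim().startsWith("map_"))
    .map((line) => {
      const parts = line.trim().split(/\s+/);
      // The texture filename is the last token on the line.
      return { property: parts[0], file: parts[parts.length - 1] };
    });
}

// Made-up example material with two texture assignments.
const mtl = [
  "newmtl BurgerBun",
  "map_Kd textures/bun_diffuse.png",
  "map_Bump textures/bun_normal.png",
].join("\n");

console.log(listTextureMaps(mtl));
// [ { property: "map_Kd", file: "textures/bun_diffuse.png" },
//   { property: "map_Bump", file: "textures/bun_normal.png" } ]
```

If a listed file doesn't exist, or a diffuse texture is assigned to the wrong property, that's the kind of error that produces the all-white or oddly colored models mentioned above.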
Play around with different settings to see if you can get it to look the way it's supposed to. See this link for more information: https://en.wikipedia.org/wiki/Wavefront_.obj_file. Lastly, OBJ models don't support animations the way GLB/GLTF models do. Even if your source says the model is animated, that may only apply to other model formats. You may be able to convert another animated version to GLB format online, or by using Blender.

Performance
Remember that JavaScript runs on the user's machine; everything that Three and this viewer script do happens locally. Don't overburden your model viewer with processing requirements the end user's machine can't handle. Light syncing and shadow display require extra processing; if you use them, do so sparingly to make an impactful point. Not every light needs a shadow to look realistic. Also, only include the files that are really needed in the final product. Extra environment textures just take up room and slow down website loading times, and excessively high-resolution images do the same and may not be needed. Downloaded models may include extraneous files unrelated to display needs. If you're not sure they are needed, try removing them and see if everything still works; only include those that are required.

Customization
There is a Storyline variable in the project called viewer. It holds a reference to the model viewer, and many of the settings seen in the Display Customizer can be accessed and modified through this variable. If you like, you could add your own script that loads this viewer object and changes settings directly. Things like turning lights on or off, changing colors, changing positions, starting or stopping rotation or animation, and moving the camera are all easily modifiable, giving you extra control over how your model behaves.
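To make the customization idea concrete, here is a purely hypothetical sketch: it assumes the viewer object exposes settings as plain properties (the property names below are invented for illustration; consult the actual configuration script and viewer trigger for the real ones):

```javascript
// Purely hypothetical shape for the viewer object; the real property names
// live in the configuration script and main viewer trigger.
const viewer = {
  rotationSpeed: 0,
  lights: { ambient: { on: true, color: "#ffffff" } },
};

// A custom trigger could flip a setting directly, e.g. toggling a light.
function toggleAmbient(v) {
  v.lights.ambient.on = !v.lights.ambient.on;
  return v.lights.ambient.on;
}

console.log(toggleAmbient(viewer)); // false - ambient light switched off
console.log(toggleAmbient(viewer)); // true  - switched back on
```

In a real project, a trigger would read the viewer reference from the Storyline variable first and then mutate it in the same way.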
You will need to reference the configuration settings script, the main viewer JavaScript trigger, and the Three documentation website (linked above) to understand the possibilities.

Limitations
There are a lot of moving parts in this project (quite literally, with the 3D models). The Display Customizer module is quite complicated, and building something like this inside Storyline is not for the faint of heart. It required four weeks, on and off, from concept to this beta version. There are undoubtedly logic errors and code conflicts that I have not discovered yet. This project is provided as is, with no guarantees. If you experience an issue, post a comment; I will look into it eventually and fix it if I can. I may post updates if I fix bugs or add features. The models included as examples were all available for free from the CG Trader, Turbo Squid, and Sketchfab websites. You can download others or use your own. I could not get FBX models to work properly and lost interest in them. Three seems to work best with GLB/GLTF models. I like the OBJ models as well, since they are easy to manually examine and modify.

Additional Files
Web object used in the project file (holds Three.js and the example models): https://paedagogus.org/3DModelViewer/Web_Object.zip
Additional sample models to experiment with: https://paedagogus.org/3DModelViewer/Other_Sample_Models.zip

Translation / localization
Hi, we currently have our course in English only, but we increasingly get requests to translate the content. The content is mostly text and speech (generated with the text-to-speech feature). I know about the feature to export, translate, and import again, but now with AI, LLMs, and translation tools like DeepL, is there a smarter and easier way to do it than duplicating slides and courses in different languages? Anyone with ideas, experience, or suggestions? I am happy to hear what you think.