instructional design
Teaser: Storyline "Chat To Animation"
Big things are coming to Storyline 360. Last month at the Articuland Summit in Boston, our COO Brian Gil gave a sneak peek at something our team's been quietly working on: AI-powered animations inside Storyline. We've been calling this feature "chat to animation" internally. The idea is simple but powerful: talk to Storyline's AI Assistant about how you want your slide to animate, and it helps bring your vision to life. The attached video shows a little preview of this feature in a fun "Feline Overlord" themed Storyline course.

On the first slide, I entered this prompt into the AI Assistant chat: "Can you suggest an animation scheme for this slide?" It broke down the suggested animation effects for each object, then asked if I wanted to create a trigger for them. After responding "yes," the AI Assistant wrote the JavaScript code to create the animations and automatically associated it with the "When timeline starts" event. It also surfaced a "Preview" button that jumps into Storyline's preview mode so I could see the animation in action.

Notice that the AI Assistant window stays visible during preview. This means I can refine the animation while previewing to home in on the exact look and feel I want. In this case, I wanted the slight "pulse" of the yellow next arrow to begin after the image and text animations completed, so I entered: "Great! Can you delay the animation on the next arrow a bit so that it starts after the other objects have animated in?" The AI revised the JavaScript in the trigger and immediately replayed the slide so I could see the change and tweak further if needed. That ability to preview, refine, and replay instantly is what makes this experience feel so magical.

If you'd like to see the published course in action, you can find it on Review 360: Chat To Animation Teaser | Review 360

This feature should land in private beta soon, and we'd love to get your feedback. If you want to help shape how this evolves, email beta@articulate.com to get started!
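For the curious, here is a hedged sketch of the kind of JavaScript such a generated trigger could contain. The post doesn't show the actual generated code, so the object names and timings below are hypothetical placeholders; selecting published slide objects by their data-acc-text attribute is simply a common Storyline scripting pattern, not confirmed as what the AI Assistant emits.

// Hedged sketch only: a guess at the general shape of a generated
// animation trigger. The names "heroImage", "bodyText", and "nextArrow"
// are hypothetical placeholders for objects' accessibility text.
function byAccText(label) {
  return document.querySelector('[data-acc-text="' + label + '"]');
}

var image = byAccText('heroImage');
var text = byAccText('bodyText');
var arrow = byAccText('nextArrow');

// Fade-and-rise entrance for the image and text, using the standard
// Web Animations API available in modern browsers.
[image, text].forEach(function (el, i) {
  if (!el) return;
  el.animate(
    [
      { opacity: 0, transform: 'translateY(20px)' },
      { opacity: 1, transform: 'translateY(0)' }
    ],
    { duration: 600, delay: i * 300, easing: 'ease-out', fill: 'both' }
  );
});

// Gentle pulse on the next arrow, delayed until the entrances finish
// (the refinement requested in the chat above).
if (arrow) {
  arrow.animate(
    [
      { transform: 'scale(1)' },
      { transform: 'scale(1.15)' },
      { transform: 'scale(1)' }
    ],
    { duration: 800, delay: 1200, iterations: 3 }
  );
}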
Designing Immersive Phone Conversations in Storyline

Ever have two characters talk in a training module, and it still feels flat, even with speech bubbles, audio, and triggers? This free Storyline phone conversation template changes that. Whether you're designing for sales, compliance, healthcare, or support, it creates real, layered conversations that feel like you're eavesdropping on a call.

- Animated phone effects
- Realistic voiceover dialogue
- Transparent APNG waveforms (way better than GIFs!)
- Custom triggers for pick-up/end call
- Clean, modern layout with animated text

Watch how it works: https://www.youtube.com/watch?v=kMpUcYJRNnE
Preview the demo: https://www.redesignedminds.com/Discuss/story.html
Download it free: https://drive.google.com/file/d/19AvmE7q3PAUbXoNKIViQtPNqCwUoFDQW/view?usp=sharing

If your training includes a conversation, this is how you bring it to life.

Meet Your New Teammate: First Impressions of Articulate's AI Assistant
Introduction: Why AI Built for eLearning Changes Everything
AI is everywhere these days, from writing emails to generating images, creating videos, and more. We all know tools like ChatGPT, Midjourney, DALL·E, Grammarly, Synthesia, and plenty more. They've quickly become part of our daily workflows, or at least they have in mine! But if you've ever tried using these tools to help build an eLearning course, you've probably noticed something: they're smart, but they don't really get what we do. That's why I was both excited and curious when I heard that Articulate was introducing an AI Assistant, built right into Storyline and Rise. Finally, an AI tool designed specifically for instructional designers and eLearning developers. I've been working with Articulate tools for over 14 years, and like many of you, I'm always looking for ways to speed up my workflow without sacrificing creativity or quality. So the big question was: could this AI Assistant actually help me design or improve my courses in a way that generic AI tools can't? Spoiler alert: it can. And it did. This is the first post in a series where I'll share how Articulate's AI Assistant is changing the way I approach course development, making everyday tasks faster, smoother, and honestly, a bit more fun. So let's take a closer look at why having AI built specifically for eLearning really makes a difference.

Why Use Articulate's AI Assistant Instead of Other AI Tools?
Like many of you, I've used my fair share of AI tools, from ChatGPT for brainstorming to DALL·E for generating creative visuals. These tools are great, but they're generalists. They don't know (or care) that I'm building an eLearning course. That's where Articulate's AI Assistant stands out. It's designed inside Articulate Storyline and Rise, for people like us: instructional designers, eLearning developers, and content creators. No copy-pasting between tools, no explaining to a chatbot what a "learning objective" is every single time. Here's why I immediately saw the benefit of having AI built right into the tools I already use:

- It understands context. You're not starting from scratch with every prompt. The AI Assistant knows you're working within slides, quizzes, scenarios, and learning objectives.
- It fits seamlessly into your workflow. No need to bounce between apps or worry about formatting. You stay in Storyline or Rise, focused on creating, and the AI is right there when you need a boost.
- It's tailored for eLearning tasks. Whether you're drafting instructional text, generating quiz questions, or adjusting tone for different audiences, it's built to support tasks we handle every day.

Other AI tools are powerful, but they weren't made for eLearning. Articulate's AI Assistant feels like it was built by people who understand the little challenges that come with designing courses, and that makes all the difference.

What Impressed Me Right Away
I went in with low expectations. I mean, AI is cool, but it's not magic, right? Well, after just a few prompts, I found myself genuinely impressed.

Articulate's AI Assistant is fast and simple. No manuals, no guesswork. You type, it helps. It felt less like learning a new feature and more like having a colleague nearby to bounce ideas off.

Articulate's AI Assistant gets you moving. The hardest part of creating content is often just getting started. The AI Assistant hands you a decent draft so you're not stuck wondering how to begin. From there, it's all about tweaking.

Articulate's AI Assistant understands eLearning.
This isn't some generic writing tool; it gets that you're creating learning content. Whether it's suggesting learning objectives or drafting quiz questions, it speaks the language of eLearning. By the end of my first session, I realized this tool isn't just about saving time; it's about keeping me in that productive flow state. Less overthinking, more doing.

Wrapping Up, and What's Next
After just a short time using Articulate's AI Assistant, I knew it was going to be part of my daily routine. It's not here to replace creativity; it's here to remove those little hurdles that can slow us down. No more blank slides. No more overthinking simple tasks. And the best part? I'm only scratching the surface. In my next post, I'll show you how I'm using the AI Assistant to speed up writing, from slide content to quizzes and even branching scenarios. That's where things get really interesting. Have you given the AI Assistant a try yet? I'd love to hear how it's working for you, or if you're still wondering how to fit it into your workflow. Drop a comment below and let's share ideas! Stay tuned, more AI-powered tips coming soon!

About me: Paul Alders (LinkedIn Profile, The eLearning Brewery)

Customizable 3D Model Viewer in Storyline 360 Using Three.js
TLDR
A 3D model viewer (GLB/GLTF and OBJ file formats) for Storyline 360 using Three.js. See the links below, and please check the comments for additional updates.

Updated (2025-03-08): Minor adjustments to model reloads and resource disposal. Use the new project file.

Demo: 3D Model Viewer
https://360.articulate.com/review/content/87acc80c-2820-4182-b99b-db9e7fd60852/review
Demo: 3D Model Display Customizer
https://360.articulate.com/review/content/d740f374-c2ac-4ae0-92bc-255b7d35ee1a/review

Introduction
In my ongoing efforts to better my skills, I've been learning more about Three.js, a 3D animation and visualization library for the web. At its simplest, Three displays 3D models in a web browser. This is something that many Storyline users have expressed interest in, but alas, it is not directly supported. Storyline actually does incorporate Three.js as its viewer for 360-degree images but otherwise does not include it as a part of its projects.

This Project
Since displaying graphics with Three is actually fairly easy, it seems like this is something that would have already been done. Since I couldn't find much regarding integrating Three with Storyline, however, I decided to use this as a test bed for my learning experience. As a disclaimer, I am not a programmer, so my code may be inelegant and inefficient in areas. I also know very little about 3D modeling. My design decisions were based upon what I found interesting or potentially useful and may overlook standard practices or some key features that one might expect. Feel free to comment.

The Model Viewer
I broke this project into two parts. Both are created within Storyline modules. One is a model viewer template that you can use to display two varieties of 3D model files (GLB/GLTF and OBJ models). It accommodates a configuration option (a list of variables in a trigger) which allows you to customize many aspects of how the model is displayed on your slide. You can include multiple models, each with their own configuration, and switch between them. The size of the model viewer matches the container object on your slide, so it can be sized to your needs. The template module is pretty simple, with a viewer container, some JavaScript, and a handful of triggers and variables. I've included the project file for the viewer. You should either be able to adapt it directly or incorporate its parts into your own projects.

The Display Customizer
The second part is another, more complicated Storyline module. This component can be used to customize how your model is visualized. I linked much of the viewer's functionality to a collection of Storyline controls, spread across multiple slide layers. Once you have your model set up and adjusted to your liking, you can export a block of JavaScript that represents all of the settings used in the viewer's configuration options. You will copy and paste this into one of your viewer's project triggers. Combined with your model files, this enables the 3D viewer to reproduce what you designed for display on your slide. Of course, you can also manually edit the configuration options if you desire, but for anything more than minor edits, this is far easier. Due to its complexity (4,000+ lines of script and several hundred variables and triggers), I have not shared the project file. I will, however, leave an active link to the published module that you can use to set up your models.
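To give a sense of scale for what follows: at its core, displaying a model with Three.js really does take very little code. Here is a minimal, hedged sketch of a bare-bones GLB viewer. It is separate from the template's actual implementation, and the model path is a placeholder.

// Minimal sketch of a Three.js GLB viewer - not the template's actual code.
// Assumes THREE, GLTFLoader, and OrbitControls are already available
// (the template's web object exposes them on window; on a plain page you
// would import them from the 'three' module). 'model/scene.glb' is a
// placeholder path.
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(400, 400);
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(45, 1, 0.1, 100);
camera.position.set(0, 1, 3);

// Simple lights so the model is visible without an environment map.
scene.add(new THREE.AmbientLight(0xffffff, 0.5));
const key = new THREE.DirectionalLight(0xffffff, 1.5);
key.position.set(2, 4, 3);
scene.add(key);

const controls = new OrbitControls(camera, renderer.domElement);

new GLTFLoader().load('model/scene.glb', (gltf) => scene.add(gltf.scene));

renderer.setAnimationLoop(() => {
  controls.update();
  renderer.render(scene, camera);
});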
The Details (for anyone who cares)
Inspiration for this project came from the following sources:
https://threejs.org/docs/
https://threejs-journey.com/
https://github.com/donmccurdy/three-gltf-viewer
https://community.articulate.com/discussions/discuss/drag-and-drop-objects-into-storyline/994181
https://github.com/Sphinxxxx/vanilla-picker

Model Viewer
The viewer module consists of:
- A web object containing your model files and some JavaScript
- A viewer rectangle on your slide with its accessibility text set to "glb"
- A few variables
- A few triggers, including a main JavaScript routine and some configuration options

The Web Object
We will use a web object to include your model files and the Three.js base code in your project. While Storyline can load a version of Three when needed, it is older and lacks many of the additional resources we need. The script in the web object is a custom bundle of the current Three components we need in this project, along with the js-beautify library. The functions and classes are made available as global variables under window. Using a static version ensures that everything works together even if Three issues updates that break interactions.

You will also include copies of your model resources. The configuration script specifies the base model files for the viewer. Additional files are typically referenced from within the base files. It is easiest if you create a folder for each model and place all of the related files inside that folder, inside the web object folder. The viewer supports GLB, GLTF, and OBJ models.

GLB models are typically one file with everything embedded. GLTF models often have additional texture files associated with them. Preserve any file structure that came with your model (i.e., if your textures are in their own folder, leave them there; if they are at the same level as the model file, leave them there). Also, don't change any of their names. You can rename the GLTF or GLB files and their containing folder, but they must match what is listed in the configuration script.

OBJ models usually require several files. Include them all unless you know they are not needed. Final renders and reference images are not normally needed. As with GLB and GLTF, OBJ model files can be renamed but must match the configuration script. There is also an MTL file that should use the same name as the OBJ file (this allows the script to find it). Don't rename the texture files unless you know they need to be changed.

Note: If you download models from places like CG Trader, Turbo Squid, or Sketchfab, then sometimes the textures are separate from the models, or the filenames don't match what is specified within the GLTF or MTL files. You may have to manually locate and/or rename the texture files. Sometimes you might need to edit the texture specs inside MTL files as well. If you make your own models, then I'll assume you have what you need.

You can also include optional environmental textures, which can provide lighting information and visual backgrounds within the viewer. These are supported as EXR, HDR, JPEG, PNG, and GIF files. If you include these, create a folder inside your main model folder called myEnvMaps and put the required environmental texture files inside this folder.

Finally, add an empty text file to the web object folder and rename it to index.html. Once the web object folder is ready, add it to your project in scene 2.

Note: Anytime you change the contents of the web object folder, you need to follow ALL of the steps below.
1. Delete the existing web object.
2. Insert the new web object (browse to the folder, set it to open in a new window).
3. Move the web object to the bottom of the timeline list.
4. Publish the single slide in scene 2.
5. Click the Load Now button to open the web object page.
6. Copy the portion of the URL text matching story_content/WebObjects/[random characters]/ (make sure to include the trailing "/").
7. Paste this value into the dataFolder variable.

The Viewer Rectangle
Create a rectangle. Fill and outline don't matter, as they will be removed when published. Right-click on the shape's timeline entry, select accessibility, and edit the text to read glb. You can change this value in the tagViewerContainer variable. This rectangle can be any size or shape and placed anywhere on the slide.

Variables and Triggers
Make sure all of the variables listed in the viewer template project are included in your project. There is one trigger on the slide master. It loads the JavaScript for Three (from the web object). On the base slide, there is one trigger for the main JavaScript viewer routine. For each model you wish to display, there is one additional JavaScript trigger that sets the configuration options. You can generate new text for these triggers using the display customization module, as sketched below.
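Here is a rough, hypothetical sketch of what one of those per-model configuration triggers might contain. The template's real exported settings block isn't reproduced in this post, so every property name below is a placeholder invented for illustration, not the project's actual schema.

// Hypothetical configuration trigger - the property names below are
// invented placeholders, not the template's real schema. The real block
// is generated by the Display Customizer's export feature.
window.viewerConfig = {
  modelFolder: 'myRobot',                  // folder inside the web object
  modelFile: 'robot.glb',                  // GLB/GLTF or OBJ base file
  envMap: 'myEnvMaps/studio.hdr',          // optional environment texture
  showEnvironment: false,                  // light with it, don't show it
  camera: { position: [0, 1.2, 3], target: [0, 0.5, 0] },
  ambient: { color: '#ffffff', intensity: 0.6 },
  autoRotate: true,
  rotateSpeed: 0.5
};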
Display Customizer
The viewer has many options. Most are built into the Three objects used to display the model. A few are specific to this viewer implementation. You can manually edit the configuration trigger for each model if desired, changing values to fine-tune your viewer. For large-scale changes or initial setup, you might want to use the display customizer module (linked above).

Loading Models
The interface consists of a viewport on the left and various controls on the right. To load a model, you can drag and drop one or more files or a folder onto the viewport (or drop new files later to change models). The viewer will try to find and load the model and all of its associated textures and environment files. Dropping the files is convenient as an interface, but it requires extra processing to access the files. Since some of the model files can be large, it might take several seconds before everything gets loaded. Also keep in mind that all of the processing is done in the browser on your computer. If your machine is not very robust, then processing times may be longer. If in doubt, open and watch the browser's inspector panel console to see if there are errors related to loading the files, especially when trying to load new models. Sometimes you don't have all the files you need, or they're in the wrong folder. You will see what files the viewer is looking for, and whether they are found. If unexpected problems occur, try reloading the browser window. Feel free to comment here if you discover recurrent error conditions.

Base Settings
The base settings panel provides the main interface. You can see and control key aspects of lighting, as well as environmental, animation, and shadow conditions. You can also adjust the viewport aspect ratio in case you need something that is not square.

Lighting
Unless you set up an environment to provide illumination, you will need some lights to see your model. There are four types of lighting available. Ambient is equivalent to overhead sunlight. The other three types offer up to four light sources each. The controls show the colors. The corners control specific features (see the Help button for details). Right-click on each square to get additional options. Each light type has its own options. There is a color picker to set your desired color. Making changes will be immediately visible in the viewport. If you can't see a change, you may need to adjust the intensity or the positioning of the light. There is an option for a helper, which is a visual representation of the light's shape and position. Turn this on to help you set up the lights.

Syncing Lights
Since the viewer offers the ability to orbit the camera around your model, lighting usually remains static in relation to your model (i.e., the model and lights appear to rotate together in the viewer). A custom feature in this implementation is the ability to sync your lights to the camera so they also move around your model, creating interesting effects. This can be enabled for each individual light, in two different sync styles. Lights may be made relative to the camera position, so they appear to remain in one place in 3D space. They may also be synced in the direction of the camera, at a fixed distance. This is similar to having a flashlight trained on your model as you view it.
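Mechanically, this kind of sync is straightforward in Three.js. Here is a small sketch of both styles, written from the feature description above rather than from the project's source, so treat the details as assumptions.

// Sketch of the two sync styles described above - an assumption based on
// the feature description, not the project's actual code. Call once per
// frame from the render loop.
const viewDir = new THREE.Vector3();

function syncLights(camera, followLight, flashlight, distance) {
  // Style 1: move the light with the camera so it holds the same
  // position relative to your viewpoint while you orbit the model.
  followLight.position.copy(camera.position);

  // Style 2: keep the light at the camera, aimed a fixed distance along
  // the view direction, like a flashlight trained on the model.
  // Assumes flashlight is a THREE.SpotLight whose .target has been
  // added to the scene.
  camera.getWorldDirection(viewDir);
  flashlight.position.copy(camera.position);
  flashlight.target.position.copy(camera.position).addScaledVector(viewDir, distance);
}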
You can also specify whether each light will generate shadows. This can add realism to your displays. Shadows require significant processing, so use them sparingly to prevent laggy performance.

Other Settings
Other settings, including rotation direction and speed, environment controls, intensities, and animations, are available. Animations seem to work with GLB/GLTF models; OBJ models do not support animation directly. Try out the various controls on your model to see the effects.

Export Settings
When you have set up your model as desired, you can use the Loader Settings button to export a snapshot of the current settings. These include the model filenames and camera positions. Make sure things are in the position you want them to start in before you click the button. You will see a long list of settings that can be highlighted and copied. This gets pasted into the options trigger in the Model Viewer module. See the triggers attached to the example buttons in the demo file. You can also load and save copies of settings as session data in your browser. This could be useful if you have commonly used settings you want to reuse, or if you want to pick up where you left off on the previous day. Note that these are local to your machine and browser; they will not be available elsewhere. You can also apply the loaded or the default settings to the current model if desired. The Defaults when Drop Loading checkbox indicates whether newly dropped model files will use the current settings or the default start-up settings, in case you prefer one over the other.

Technical Notes (thanks for reading this far)

Loading Files
The Model Viewer uses physical model files included with your published project. This increases your project size but improves the loading speed of the models on the slide. The Display Customizer uses a file drop mechanism to make it easier to switch between random models. This works by parsing through the files or folders dropped and converting them into URL blobs. The blobs act like internal web addresses pointing to each of your files. Large files, especially environment textures or complex models, may take a bit to fully process and load (the Burger model, for example). When you utilize the Model Viewer for your final product, performance should be better since you only need a single set of files, and they are included locally. You could potentially modify the Viewer script to allow for loading from external URLs rather than local files, but I have not done that yet.

Environments
Environment textures are 360-degree images, similar to what you use in Storyline. The format can be EXR, HDR, JPEG, PNG, or GIF. Only equirectangular, 2:1 images are supported. EXR and HDR files tend to be very large, so keep that in mind. When using an environment, Three infers lighting information from the selected image, making external lights unnecessary. If you want to use additional lights, you will need to lower the Environment Intensity setting so the lights don't get washed out. The environment does not need to be visible to have an effect. If you want it visualized, the image will replace the background color. Since the focus is really on your model, it is normal for the environment background to be somewhat blurred. Using low-resolution images as textures will make this much more pronounced. If you wanted crisp images in the background, I believe you would need to modify the script to project the image onto a sphere instead, as you would when displaying 360-degree images (maybe I'll add this later).

OBJ Models
My understanding is limited, but environments don't project properly (or at all) onto imported OBJ models. You can display them, but they provide no lighting effects. Supposedly you can apply the environment textures to the meshes within the model, but I couldn't get that to work. My approach, awkward but I like the outcome, is to replace all of the meshes in the loaded OBJ model with new meshes, apply the existing settings, and make some adjustments to shine and gloss settings on the fly. This results in a final model that responds to your environment lighting. I found that the test models I downloaded all seemed to come out super glossy. I added a few simple steps to calculate relative gloss levels between the model components and applied an overall adjustment to set it to a reasonable level. I was happy with the test results; your mileage may vary. If your OBJ models don't come out as you expected, you may need to adjust the MTL file to fine-tune the output. I've also found that many OBJ model files (the MTL in particular) contain erroneous paths or incorrect textures assigned to material properties. If your model looks all white, black, grey, or some odd color, check the MTL file (it's plain text) and verify the data. Fix any broken paths, check whether textures are supposed to be in their own directory, and make sure the correct textures are assigned, particularly on the lines starting with "map_". These assign the texture images to material properties. Look at the actual texture images, the MTL file, and the Wavefront reference linked below. Play around with different settings to see if you can get it to look the way it's supposed to. See this link for more information: https://en.wikipedia.org/wiki/Wavefront_.obj_file. Lastly, OBJ models don't support animations the way GLB/GLTF models do. Even if your source says the model is animated, that may only apply to other model formats. You may be able to convert another animated version to GLB format online, or by using Blender.

Performance
Remember that JavaScript runs on the user's machine. Everything that Three and this viewer script do happens locally. Don't overburden your model viewer with an abundance of processing requirements if you don't think the end user's machine can handle it. Light syncing and shadow display require extra processing.
If you use them, do so sparingly to make an impactful point. Not every light needs a shadow to look realistic. Also, only include the files that are really needed in the final product. Extra environment textures just take up room and slow down website loading times. Excessively high-resolution images do the same and may not be needed. Downloaded models may include extraneous files unrelated to display needs. If you're not sure they are needed, try removing them and see if everything still works.

Customization
There is a Storyline variable in the project called viewer. This holds a reference to the model viewer. Many of the settings seen in the Display Customizer can be accessed and modified using this variable. If you desire, you could add your own script that loads this viewer object and allows you to directly change settings. Things like turning lights on or off, changing colors, changing positions, starting or stopping rotation or animation, and moving the camera are all easily modifiable, giving you extra control over how your model behaves (see the sketch at the end of this post). You will need to reference the configuration settings script, the main viewer JavaScript trigger, and the Three documentation website (linked above) to understand the possibilities.

Limitations
There are a lot of moving parts in this project (quite literally, with the 3D models). The Display Customizer module is quite complicated, and building something like this inside Storyline is not recommended for the faint of heart. It required four weeks, on and off, from concept to this beta version. There are undoubtedly logic errors and code conflicts that I have not discovered yet. This project is provided as is, with no guarantees. If you experience an issue, post a comment. I will look into it eventually and fix it if I can. I may post updates if I fix bugs or add any features. The models included as examples were all available for free from the CG Trader, Turbo Squid, or Sketchfab websites. You can download others or use your own. I could not get FBX models to work properly and lost interest in them. Three seems to work best with GLB/GLTF models. I like the OBJ models as well, since they are easy to manually examine and modify.

Additional Files
Web object used in the project file (holds Three.js and the example models): https://paedagogus.org/3DModelViewer/Web_Object.zip
Additional sample models to experiment with: https://paedagogus.org/3DModelViewer/Other_Sample_Models.zip
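As a taste of the customization described above, here is a hedged sketch of nudging the viewer at runtime. The post doesn't document the viewer object's shape, so the property names below are illustrative guesses, not the project's real API.

// Hypothetical sketch only: the post says a Storyline variable named
// "viewer" holds a reference to the model viewer but does not document
// its shape. Every property name here is an illustrative guess.
const viewer = window.viewer; // assumption: the template exposes it globally

if (viewer && viewer.lights) {
  viewer.lights.ambient.intensity = 0.8;     // dim or boost a light
  viewer.lights.spot1.color.set('#ff8800');  // recolor a spotlight
  viewer.autoRotate = false;                 // guessed rotation flag
  viewer.camera.position.set(0, 1, 4);       // move the camera
}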
Drawing Annotation on Storyline Slide

Demo: https://360.articulate.com/review/content/518383b2-1161-408d-b9f5-adb9a6f57a11/review
Inspired by code discussed on:
https://img.ly/blog/how-to-draw-on-an-image-with-javascript/
Using Charts in your Storyline | Articulate - Community

About
This is a brief example of using an HTML canvas element to enable annotation on a Storyline slide. The example displays an image on a slide, sets a few variables and the accessibility tags of some slide objects, and then runs a JavaScript trigger that creates a canvas over the image and watches the mouse buttons and movement to allow drawing on the canvas.

How it works
A canvas element is created, filled with the specified base image, and inserted below a small rectangle (canvasAnchor) that is the same width as and placed directly above the image. Another rectangle (canvasClickArea) overlays and is sized to match the image. This is the area that allows drawing on the canvas (where the mouse is watched). Brush width and color can be controlled. The drawing can be cleared. It also resizes with the slide.

To improve
- The event listeners that watch the mouse and the clear button should be better managed so they can be removed when a new canvas is created.
- A mechanism to allow a blank (clear) base should be reinstated. Right now it just relies on the initial image from the slide. Both options have their uses.
- Since the canvas is a raster image, resizing to very small and then very large results in poor image quality.
- The image can be extracted from the canvas. This could be saved or printed.
- More drawing options are available with the canvas element.

Credit: X-ray images from https://learningradiology.com/
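For anyone who wants the gist without opening the project, here is a minimal, self-contained sketch of the core technique: a canvas overlaid on an image, with mouse-driven drawing. It follows the approach described above but is not the example's actual code, and the element ID is a placeholder.

// Minimal canvas-annotation sketch - not the project's actual code.
// Assumes an <img id="baseImage"> already on the page; the ID is a
// placeholder.
const img = document.getElementById('baseImage');
const canvas = document.createElement('canvas');
canvas.width = img.naturalWidth;
canvas.height = img.naturalHeight;

// Position the canvas directly over the image.
const r = img.getBoundingClientRect();
canvas.style.cssText = 'position:absolute;left:' + (r.left + window.scrollX) +
  'px;top:' + (r.top + window.scrollY) + 'px;width:' + r.width + 'px;';
document.body.appendChild(canvas);

const ctx = canvas.getContext('2d');
ctx.drawImage(img, 0, 0);    // start from the base image
ctx.lineWidth = 4;           // brush width
ctx.strokeStyle = '#ff0000'; // brush color
ctx.lineCap = 'round';

let drawing = false;

// Convert mouse coordinates to canvas pixels (handles slide scaling).
function toCanvas(e) {
  const b = canvas.getBoundingClientRect();
  return {
    x: (e.clientX - b.left) * canvas.width / b.width,
    y: (e.clientY - b.top) * canvas.height / b.height
  };
}

canvas.addEventListener('mousedown', (e) => {
  drawing = true;
  const p = toCanvas(e);
  ctx.beginPath();
  ctx.moveTo(p.x, p.y);
});

canvas.addEventListener('mousemove', (e) => {
  if (!drawing) return;
  const p = toCanvas(e);
  ctx.lineTo(p.x, p.y);
  ctx.stroke();
});

window.addEventListener('mouseup', () => { drawing = false; });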
How are you approaching learning creation in your organization beyond "traditional" L&D use cases?

Hey ELH community,

We know that learning creation doesn't live solely within L&D or instructional design teams. In large organizations especially, managers, training and enablement teams, and other departments are increasingly creating their own learning to meet team and business needs. We're curious how that's playing out in your organization.

If you're in L&D, what's holding you back from bringing on more teams to create courses in Articulate? Are there particular challenges, whether technical, process-related, or cultural, that make it harder to open things up? And if you have scaled and democratized course creation with Articulate beyond L&D, what's helped it work well?

We'd love to learn from your experiences: what's working, what's not, and what would make it easier.

~ The Articulate Research Team

Expert Insight Needed!
Hi Everyone! I am a graduate student in an Instructional Design and Performance Technology program. In my Distance Learning Policy and Planning course, we are conducting an informal research investigation into the current use of technology in our field. We are tasked with finding out what practitioners are using out in the real world, and how they feel about those technologies. Can you please share the platforms you use and your own personal feelings about these technologies (what works well, what is challenging, etc.) for purposes such as:

- Delivering instruction or training (such as an LMS)
- Communication and collaboration
- Assessments or testing
- Analytics

Thank you so much for helping me learn from your experience!

Community Insights: What You Can Learn from David Tait's Career Pivot
One of the best things about creative careers is how flexible they are: you can take them in so many directions. For David Tait, that flexibility led from graphic design to learning design, and eventually to co-founding 4pt, a learning design studio. 4pt has been creating meaningful learning experiences for more than 16 years. In this Member Spotlight, you'll discover how adaptability, curiosity, and community shaped David's journey, and how to apply these lessons to your own career path.

From Design to Learning
"Before starting my career in e-learning, I was a student focused on design," David says. "I spent four years studying design. Two in graphic design and two in newspaper, magazine, and infographic design. That background gave me a strong foundation in visual communication, which has been incredibly useful in my learning and development (L&D) work." While still in college, he took on a freelance project as a graphical user interface designer for the Northern College Network. "It was my first real step into the world of digital learning design," he recalls. "It helped me see how I could apply my design skills in a completely different context." Soon after, a former lecturer offered him a role at an e-learning startup creating online CPD courses for healthcare professionals. "Working in a startup meant wearing many hats," David says. "That experience really shaped my path and helped me see how my design skills could grow into a career in learning."

Tip: Apply your existing creative skills to a small digital learning project (freelance, volunteer, or self-initiated). Hands-on experience helps bridge design and instructional work faster than theory alone.

Turning Change into Opportunity
A few years later, the company was acquired, and layoffs followed. "Rather than seeing it as a setback, my studio manager and I took it as an opportunity," David says. "When we started 4pt, all of those responsibilities suddenly became our job. Being able to adapt to new challenges was essential, and it's a big reason why we've been able to thrive."

Tip: When your path shifts unexpectedly, use it to test new skills or partnerships. Career detours often reveal strengths you wouldn't discover in a stable role.

Finding Flexibility with Storyline
"One project in 2013 really shaped our company," David says. "A client asked us to build a course in Storyline 1. We'd never used it before, but rather than turn the work away, we invested in licenses and learned as we went." "Before long, Storyline became the tool most of our clients wanted to use," he explains. "Storyline gave us the ability to solve problems ourselves, experiment more freely, and move much faster. That agility has stayed with us ever since; it's a core part of how we approach learning design."

Tip: Don't wait to feel like an expert. Pick a project, open the tool, and build. Use the community forums and shared files when you hit roadblocks.

The Power of Community
"I've lost count of how many times I've hit a dead end in Storyline and found the solution on the forums," David says. "That support has saved me countless hours and kept projects moving. The community around Articulate is unlike anything else." Over time, helping others became just as rewarding. "Being part of E-Learning Heroes isn't just about getting help," he adds. "It's about giving back. I try to pay it forward when I can, and that sense of community has been such a valuable part of my journey."
Tip: When you find an answer in ELH, take a minute to thank the poster, or add your own version of the solution. Small interactions build visibility and confidence.

Lessons from the Journey
"Figure out where your limitations are, and then build a trusted network of professionals who can help you overcome them," David says. "Continuous learning is important, but you don't have to master everything yourself." He also believes in stepping outside your comfort zone: "Sometimes doing that sooner opens doors you didn't even realize were there." "I try to focus on projects where I can see real value and impact, and to work with people I genuinely like and respect. That combination has made the journey far more meaningful."

Tip: Find one collaborator who complements your skills, such as a developer, writer, or media pro, and trade knowledge. Collaboration accelerates growth and keeps learning fun.

Looking Ahead
These days, David is focused on advancing localization in his projects and exploring how AI fits into e-learning. "We're evaluating Storyline's new localization features ahead of a major project," David says. "I'm excited to see how these tools evolve and how we can integrate them to deliver even better multilingual learning experiences." He's also reading Co-Intelligence: Living and Working with AI by Ethan Mollick. "It's not written specifically for L&D, but it's helped me think more critically about how AI can be used thoughtfully and effectively."

Tip: Keep one "outside-the-industry" book on your reading list. Fresh perspectives often spark the most creative ideas.

Your Turn
David's story is a reminder that creativity, curiosity, and community can take your career in directions you never planned, but might love most. What's one skill, or moment, that's shaped your own learning design journey? Share it in the comments below!

AI Voices
Just my two cents: AI voices were good because we didn't need to go back to our live voice talent to get something redone, or if we wanted to update one slide or add something to a presentation. Now we are seeing voices being removed, so the advantage of the AI voices is reduced. I see two posts this morning, and it's not even lunchtime, from people who need to make updates to one or two slides, change a word, or add something, and the voice is gone. Maybe we need to look at AI again.