Show Us Your AI Makeover!
Whether you joined the AI Assistant: Beyond the Basics webinar or are just starting to explore what the AI Assistant can do, this challenge is for you. In the session, we shared ways to go beyond quick drafts, using AI to help with the trickier parts of course creation, like writing questions, refining lessons, generating images, or even creating scenarios. Now it's your turn to experiment and share what you've built.

💡 Show us your "AI Makeover"

Post a quick before-and-after example of how you used AI Assistant to transform your content. You could share:

- A short "before" snippet — like SME notes, a few bullet points, a slide, or a paragraph of draft text
- The "after" — what AI helped you build from it (for example, an outline, lesson, quiz, or visual)
- A quick note about how you refined or customized what AI created

✨ Or just join the conversation:

- What parts of your workflow feel easiest to improve with AI right now?
- Where are you still experimenting or getting stuck?
- Have you discovered any prompt tricks or creative uses worth sharing?

Let's use this thread to keep building on what was learned in the session and learn from each other's experiments along the way.

AI Assistant: Producing Highly Realistic Audio
As a course author, you want to do more than just present information—you want to create multi-sensory e-learning experiences that resonate with learners. Using sound creatively can help you get there. AI Assistant's text-to-speech and sound effects features let you create highly realistic AI-generated voices and sound effects for more immersive and accessible content.

Originally, both of these features could only be accessed in Storyline 360. However, as of the July 2025 update, AI Assistant in Rise 360 can also generate text-to-speech narration. Visit this user guide to get started creating AI-generated narrations in Rise 360.

In Storyline 360, these features can be accessed from the Insert Audio dropdown in the AI Assistant menu within the ribbon. Find them under the Home or Insert tab when you're in slide view, or use the quick action buttons in the AI Assistant side panel for added convenience.

Bring Narration to Life with AI-generated Voices

If you've ever used classic text-to-speech, you probably wished the voices sounded less, well, robotic. AI Assistant's text-to-speech brings narration to life with contextually aware AI-generated voices that sound more natural—and human! Check out the difference in quality between a standard voice, a neural voice, and an AI-generated voice by playing the text-to-speech examples below.

Standard Voice (audio sample)
Neural Voice (audio sample)
AI-generated Voice (audio sample)

To get started, click the Insert Audio icon in the AI Assistant menu to open the Generate AI Audio dialog box. A library of AI-generated voices—which you can filter by Gender, Age, and Accent—displays under the Voices tab. The voices also have descriptions like "deep," "confident," "crisp," "intense," and "soothing," plus categories that can help you determine their ideal use cases, from news broadcasts to meditation, or even ASMR. Find these qualities under the voice's name, and use the play button to preview the voice.

Currently, there are 52 pre-made voices to choose from, and you can mark your favorites by clicking the heart icon. This way, you can easily access your preferred voices without having to scroll through the list. Toggle the View option to Favorites to find all your favorite voices, or In project to see the voices used in the current project.

Once you've decided on a voice, click the button labeled Use to switch to the Text-to-Speech tab. Your chosen voice is already pre-selected. Next, enter your script in the text box provided, or click the add from slide notes link to copy notes from your slide. The script can be a maximum of 5,000 characters. For accessibility, leave the Generate closed captions box checked—AI Assistant will generate closed captions automatically. You can instantly tell whether a text-to-speech narration has closed captions by the CC label that appears next to each output.

Find More Voices in the Voice Library

In addition to the pre-made voices, you also have access to an extended voice library with thousands of ultrarealistic, AI-generated voices that can be filtered by age, gender, and use case. Discover the right voice for your content by clicking the Voice Library button on the right under the My Voices tab. Check out this article to learn how to use the voice library.

Note: Pre-made voices might lose compatibility and may not work well with newer voice models.
Voices in the voice library might also disappear after their removal notice period ends or if the creator chooses to remove them. If this occurs, your generated narration will still play but can no longer be edited. Switch to a different voice if you need to modify an existing narration created with a voice that's no longer supported.

Adjust the Voice Settings

Unlike classic text-to-speech, the AI-generated voices in AI Assistant's text-to-speech can be customized for a tailored voice performance. The Model setting lets you choose from three options:

- v3 (beta) - The most expressive model, with high emotional range and contextual understanding in over 70 languages. Allows a maximum of 3,000 characters. Note that this model is actively being developed and is available only in Rise 360. Functionalities might change, or you might encounter unexpected behavior as we continue to improve it. For best results, check out the prompting techniques below.
- Multilingual v2 (default) - Highly stable and exceptionally accurate lifelike speech, with support for 29 languages. Allows a maximum of 10,000 characters.
- Flash v2.5 - Slightly less stable, but generates faster, with support for 32 languages. Allows a maximum of 40,000 characters.

Pro tip: Some voices sound better with certain models, and some models perform better in specific languages. Experiment with different combinations to find what works best. For example, the Matilda voice sounds more natural in Spanish with the Multilingual v2 model than with v3.

The Stability setting controls the balance between the voice's steadiness and randomness. The Similarity setting determines how closely the AI should adhere to the original voice when attempting to replicate it. The defaults are 0.50 for the stability slider and 0.75 for the similarity slider, but you can experiment with these settings to find the right balance for your content. Additional settings include Style exaggeration, which amplifies the style of the original voice, and Speaker boost, which enhances the similarity between the synthesized speech and the voice. Note that adjusting either of those settings makes speech generation take longer.

Note: Some voices in the Multilingual v2 model tend to have inconsistent volume—fading out toward the end—when generating lengthy clips. This is a known issue with the underlying model, and our AI subprocessor for text-to-speech is working to address it. In the meantime, we suggest the following workarounds:

- Use a different voice
- Switch to the Flash v2.5 model
- Increase the voice's stability
- Break your text into smaller chunks to generate shorter clips

Do I Need to Use SSML?

AI Assistant has limited support for speech synthesis markup language (SSML) because AI-generated voices are designed to understand the relationship between words and adjust delivery accordingly. If you need to manually control pacing, you can add a pause. The most consistent way to do that is by inserting the syntax <break time="1.5s" /> into your script, which creates an exact and natural pause in the speech. For example:

With their keen senses <break time="1.5s" /> cats are skilled hunters.

Use seconds to describe a break of up to three seconds in length. You can also try a simple dash - or an em-dash — for a brief pause, multiple dashes for a longer pause, or an ellipsis ... to add a pause between words. However, these punctuation-based options may not work consistently, so we recommend the break syntax above for predictable results.
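For instance, a short narration script (the wording here is purely illustrative) might combine exact breaks with the lighter punctuation-based pauses:

Welcome to the module. <break time="2s" /> Let's begin with a quick recap.
Review the checklist first - then... when you're ready, continue to the next section. <break time="1s" /> Good luck!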
Just keep in mind that an excessive number of break tags can potentially cause instability.

Prompting Techniques for v3 (beta)

The v3 (beta) model introduces emotional control via audio tags, enabling voices to laugh, whisper, be sarcastic, or show curiosity, among other options. The tags below can be used to control vocal delivery and emotional expression, as well as to add background sounds and effects. Some experimental tags for creative uses are also included.

Voice and emotion: [laughs], [laughs harder], [starts laughing], [wheezing], [whispers], [sighs], [exhales], [sarcastic], [curious], [excited], [crying], [snorts], [mischievously]
Example: [whispers] Don't look now, but I think they heard us.

Sounds and effects: [gunshot], [applause], [clapping], [explosion], [swallows], [gulps]
Example: [applause] Well, that went better than expected. [explosion] Never mind.

Experimental: [strong X accent] (replace X with the desired accent), [sings], [woo]
Example: [strong French accent] Zat is not what I 'ad in mind, non non non.

Aside from the audio tags, punctuation also impacts delivery. Ellipses (...) add pauses, capitalization emphasizes specific words or phrases, and standard punctuation mimics natural speech rhythm. For example: "It was VERY successful! … [starts laughing] Can you believe it?"

Tips:

- Use audio tags that match the voice's personality. A calm, meditative voice won't shout, and a high-energy voice won't whisper convincingly.
- Very short prompts can lead to inconsistent results. For more consistent, focused output, we suggest prompts over 250 characters.
- Some experimental tags may be less consistent across voices. Test thoroughly before use.
- Combine multiple tags for complex emotional delivery, and try different combinations to find what works best for your selected voice. The list above is simply a starting point; more effective tags may exist, so experiment with combining emotional states and actions for your use case.
- Use natural speech, proper punctuation, and clear emotional cues to get the best results.

Multilingual Voices Expand Your Reach

Another compelling benefit of AI-generated text-to-speech is the ability to bridge language gaps. With support for over 70 languages depending on the model—including some with multiple accents and dialects—AI Assistant's text-to-speech helps your content resonate with a global audience. All you have to do is type or paste your script in the supported language you want AI Assistant to use. (Even if a voice's description notes a specific accent or language, AI Assistant will generate the narration in the language used in your script.) Some voices work best with certain accents or languages, so feel free to experiment with different voices to find the best fit for your needs. The table below provides a quick rundown of supported languages.
Available in v3 (beta), Multilingual v2, and Flash v2.5: Arabic (Saudi Arabia), Arabic (UAE), Bulgarian, Chinese, Croatian, Czech, Danish, Dutch, English (Australia), English (Canada), English (UK), English (USA), Filipino, Finnish, French (Canada), French (France), German, Greek, Hindi, Indonesian, Italian, Japanese, Korean, Malay, Polish, Portuguese (Brazil), Portuguese (Portugal), Romanian, Russian, Slovak, Spanish (Mexico), Spanish (Spain), Swedish, Tamil, Turkish, Ukrainian

Available in v3 (beta) and Flash v2.5: Hungarian, Norwegian, Vietnamese

Available only in v3 (beta): Afrikaans (afr), Armenian (hye), Assamese (asm), Azerbaijani (aze), Belarusian (bel), Bengali (ben), Bosnian (bos), Catalan (cat), Cebuano (ceb), Chichewa (nya), Estonian (est), Galician (glg), Georgian (kat), Gujarati (guj), Hausa (hau), Hebrew (heb), Icelandic (isl), Irish (gle), Javanese (jav), Kannada (kan), Kazakh (kaz), Kirghiz (kir), Latvian (lav), Lingala (lin), Lithuanian (lit), Luxembourgish (ltz), Macedonian (mkd), Malayalam (mal), Mandarin Chinese (cmn), Marathi (mar), Nepali (nep), Pashto (pus), Persian (fas), Punjabi (pan), Serbian (srp), Sindhi (snd), Slovenian (slv), Somali (som), Swahili (swa), Telugu (tel), Thai (tha), Urdu (urd), Welsh (cym)

Create Sound Effects Using Prompts

Sound effects that align with your theme and content can highlight important actions or feedback, like clicking a button or choosing a correct answer, offering a more engaging and effective e-learning experience. With AI Assistant's sound effects, you can now use prompts to easily create nearly any sound imaginable. No more wasting time scouring the web for pre-made sounds that may cost extra!

Start creating high-quality sound effects by going to the AI Assistant menu in the ribbon under the Home or Insert tab. Then, click the lower half of the Insert Audio icon and choose Sound Effects. (You can also access it from the Audio dropdown within the Insert tab: simply select Sound Effects under the AI Audio option.)

In the text box, describe the sound effect you want and choose a duration. Since AI Assistant understands natural language, sound effects can be created from anything from a simple prompt like "a single mouse click" to a very complex one that describes multiple sounds or a sequence of sounds in a specific order. Just note that you have a maximum of 450 characters to describe the sound you want to generate. Listen to the following audio samples of sound effects created with a simple prompt and a complex one.

Prompt: A single mouse click (audio sample)
Prompt: Dogs barking, then lightning strikes (audio sample)

You can adjust the Duration—how long the sound effect plays—up to a maximum of 22 seconds. For example, if your prompt is "barking dog" and you set the duration to 10 seconds, you'll get continuous barking, while a duration of two seconds yields one quick bark. You can also adjust the Prompt influence slider: sliding it to the right makes AI Assistant adhere strictly to your prompt, while sliding it to the left allows a freer interpretation.

Pro tip: You can instantly tell whether a sound effect has closed captions by the CC label that appears next to each output.

Some Pro Terms to Know

Using audio terminology—specialized vocabulary that audio experts use in their work—can help improve your prompts and produce even more dynamic sound effects.
Here are a few examples:

- Braam: A deep, resonant, and often distorted bass sound used in media, particularly in trailers, to create a sense of tension, power, or impending doom.
- Whoosh: A quick, swooshing sound often used to emphasize fast motion, transitions, or dramatic moments.
- Impact: A sharp, striking noise used to signify a collision, hit, or sudden forceful contact, often to highlight a moment of action or emphasis.
- Glitch: A short, jarring, and usually digital noise that mimics a malfunction or distortion, commonly used to convey errors.
- Foley: The process of recreating and recording everyday sound effects, like movements and object sounds, in sync with the visuals of a film, video, or other media.

Here's something fun to try! Generate a 3-second sound effect using the prompt "studio quality, sound designed whoosh and braam impact." Increasing the duration may produce a better sound effect, but it will also create more dead air toward the end.

Pro tip: Onomatopoeias—words like "buzz," "boom," "click," and "pop" that imitate natural sounds—are also important sound effects terms. Use them in your prompts to create more realistic sound effects.

Video Tutorials

Want to learn more before getting started? Check out our video tutorials for additional guidance on using AI Assistant to generate text-to-speech and sound effects.

- Create AI-generated Text-to-Speech
- Create AI-generated Sound Effects

Articulate 360 Training also has additional video tutorials on using other AI Assistant features:

- Use AI Assistant features in Rise 360
- Use AI Assistant features in Storyline 360

You must be logged in to your Articulate 360 account to watch the videos. Don't have an account yet? Sign up for a free trial now!

An AI-Powered Knowledge Check in Storyline
I've been wrestling with this challenge for a while: How do we get learners to reflect without relying on quiz after quiz? How can we use open-ended questions to encourage deeper thought? I've long considered AI for this, but there were hurdles:

- How do you integrate it into an Articulate Storyline course without paying for tokens or setting up contracts?
- How do you do it without leaking credentials in the course itself?
- Can it be done without having to modify code after exporting the course?

I learned recently that Hugging Face Transformers provide a solution. You can now download an AI model to the learner's machine and run it locally in their browser. I've managed to get this running reliably in Storyline, and you don't have to modify the code after export!

In the final slide of the demo, your goal is to recall as much as possible from the podcast/summary. The AI then checks your response and gives you a percentage score based on what you remembered.

Live demo & tutorial here: https://insertknowledge.com/building-an-ai-powered-knowledge-check-in-storyline/

If you want to learn how, I recommend starting with the sentiment analysis because it's easier to get started. I've also provided a file to download in that tutorial if you want to reverse engineer it.
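For a rough idea of the approach, here's a minimal sketch of the sentiment-analysis starting point: Transformers.js (the @xenova/transformers package) loaded from a CDN inside a Storyline "Execute JavaScript" trigger. The Storyline variable names (LearnerResponse, SentimentLabel, SentimentScore) are hypothetical placeholders, and the setup in the tutorial linked above may differ:

(async () => {
  // Load Transformers.js as an ES module from a CDN; no API keys or tokens needed.
  const { pipeline } = await import(
    'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.17.2'
  );

  // The first run downloads a small model to the learner's machine and caches it;
  // later runs load from the browser cache and execute entirely locally.
  const classify = await pipeline('sentiment-analysis');

  // Read the learner's open-ended answer from a (hypothetical) Storyline text variable.
  const player = GetPlayer();
  const text = player.GetVar('LearnerResponse');

  // Run the model in the browser and write the result back to Storyline variables.
  const [result] = await classify(text); // e.g. { label: 'POSITIVE', score: 0.98 }
  player.SetVar('SentimentLabel', result.label);
  player.SetVar('SentimentScore', Math.round(result.score * 100));
})();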
AI Analysis

Not Articulate-related, but I'm analyzing some data (learner survey responses) using AI (Copilot and ChatGPT). The issue is that it's a real struggle to get the AI to return the same output format for every document, even though the data documents themselves all share the same format. Does anyone have any advice for this? P.S.: I tried asking Copilot about this, but the solution it gave didn't work. Thanks!

AI in E-Learning: Opportunities and Innovation in Instructional Design
Our new micro e-learning course dives into the top three questions shaping the future of AI in instructional design. Hear expert audio insights, explore real-world examples, and discover practical ways to bring AI into your own projects.

👉 Click the link below to start learning and unlock new possibilities with AI: https://www.swiftelearningservices.com/ai-in-e-learning/

AI Assistant in Rise: Show & Tell!
Whether you joined the Overview of AI Assistant in Rise session or you're just starting to explore the feature, this thread is for you! It's Show & Tell time — we want to see what happens when you put AI Assistant to work in Rise.

✅ You can:

- Share what you made (bonus points for screenshots!)
- Describe your favorite AI prompt or surprise moment
- Ask others how they'd approach something differently

💡 Not sure where to start? Try one of these:

- Rewrite an intro paragraph block in a new tone or voice
- Use AI Assistant to generate an image for your course
- Use Instant Convert to transform an existing block into something new
- Ask AI to brainstorm scenarios for your learners

🎯 Want a little extra challenge? Create a "before and after" — show how AI Assistant transformed a block, image, or section of your course.

👉 What did you build (or discover) with AI Assistant in Rise? Drop your Show & Tell below!

Intuitive Role Playing Exercise with Feedback
Hello! Is there an AI tool within Storyline or Rise that lets you insert an intuitive, back-and-forth role-playing activity that gives users real-time feedback on their responses? The goal is to help enhance communication skills during customer service calls.

Character Expression/Pose Series Using Photo Characters?
I have a common problem in many of my e-learning projects. I want a photographic character, say a man at a desk on the phone, looking happy. Then I want a version of him with an angry expression, and another with an exhausted expression. I want to choose the age, race, etc., and I want to be able to easily change the background. Finding this kind of character series has been difficult. I have resorted to finding stock video and grabbing frames that represent the expressions, but I cannot adjust the backgrounds very easily. Does anyone know of an AI tool where you can create a photographic character and then show that character in multiple poses, expressions, and settings? Thanks for any advice!

AI Won't Replace Skill - This Will!
We’re in an era where anyone can make studio-level visuals in minutes. But real creators know the secret isn’t AI alone, it’s how you fuse it with your own craft. In my latest YouTube video, I walk through how I designed Kai inside a 3D scene, a 3D low-poly character for an eLearning scene, combining Blender modeling with AI tools like Seedream and Gemini. The result? A branded, professional-grade animation that stays original and on-style. If you’re a learning experience designer, 3D artist, or just exploring how to use AI effectively, this one’s for you. Watch the full video on YouTube and see how to stay creative in the AI era.23Views0likes0Comments