AI Assistant
Introducing AI Chat + JavaScript Entrance Animations
Now you can chat with AI to generate simple JavaScript animations for your slide, making it easier to bring your ideas to life. Update to the latest version to give it a try and let us know what you think! To help you get started, we have a video walkthrough, a tutorial course, and documentation covering how it works, sample prompts, and animation examples, all of which can be found here.

Why JavaScript? This update is our first step in helping authors experiment and push creative boundaries. Based on your feedback, we’re already working on support for built-in animations, and we’ll be sharing a preview soon! We’re also exploring more ways AI can handle tedious tasks for you, so let us know what you’d like AI Assistant to be able to do for you!

Try It Out! Use AI Chat to make an object fly in from the right side of the screen after one second. Feel free to get creative: a frisbee flying across a park? A plane flying across a cloudy sky? Sky's the limit (see what we did there?)! Share your example in the comments below!

Pronunciation in AI Audio (TTS)
I am hoping someone can clarify the use of SSML with Eleven Labs voices in Storyline 360. I've read variously on E-Learning Heroes that SSML is not supported with Eleven Labs voices, or that it is supported but only for <phoneme> and <break>, or that it's supported for other tags too. I've been trying to use it with the phoneme tag, but when I generate speech, it simply skips over the tagged content. This is a sample of one use I'm trying to get to work:

Repair of this injury can be addressed laparoscopically in stable patients. The first priority is adequate exposure. Additional ports should be placed as needed to isolate the injury and improve visualization. If the bowel is friable it can be handled indirectly using <speak><phoneme alphabet="ipa" ph="æ.tɹ.ə.mˈæɾɪk">atraumatic</phoneme></speak> graspers on the mesentery, and the edges of an enterotomy can be approximated to limit contamination if the injury is small.

When generated, all the speech between the opening <speak> and closing </speak> tag is simply skipped, but the rest of the speech is rendered. Since I'm not sure whether SSML is even supported, I can't tell if I'm writing the tags incorrectly or if it simply isn't supported.

Problem with automatically generated subtitles?
Hi everyone. I've noticed a problem with the voices generated by Storyline's AI and the subtitles that are automatically generated at the same time. I copy and paste a prompt and check the “Generate subtitles” box. The generation works, though the subtitles are a mess, as usual, and it takes me a long time to cut and arrange them perfectly. Then I decide to change (for example) a single word in my prompt. To preserve my work on the subtitles, I uncheck the “Generate subtitles” box. As a result, the subtitles are still completely recreated, once again in complete disarray.

Is there something I'm not understanding about “Generate subtitles” versus NOT “Generate subtitles”? Should I do something else (other than exporting my subtitles and reimporting them)? This seems like a bug, doesn't it? Has anyone else noticed this? (I haven't found any previous posts on the subject.) It seems to me that it worked logically in a previous version: when I unchecked the “Generate” box, the subtitles that had already been created were left untouched. Am I wrong? Have you noticed the same thing? Any ideas or tips?

Thanks, Thierry

Articulate 360 User Guides are Moving!
Hey, everyone! As we get ready to say goodbye to 2025 and hello to 2026, we're setting you up for even greater success in the new year. In the next few weeks, we’re moving the user guides you know and love in ELH over to our other documentation database in Product Support! That means you’ll soon have a “one-stop shop” for all of your Articulate 360 documentation needs.

This change will also improve the accuracy and responsiveness of Artie, our AI support agent. (Did you know you can ask Artie to find documentation for you instead of searching?) Unifying our databases enables us to analyze usage more deeply so we can better tailor our documentation to meet your needs. Plus, it gives us more options for translating user guides into other languages to serve our global community.

Once everything has been transferred, the directories for the individual user guides you depend on (and may have bookmarked), like those for Storyline 360 and Rise 360, will remain available in ELH for your convenience. When you follow the links to the individual articles, they’ll take you to the Product Support knowledge base. We'll have landing pages for each Articulate 360 product there as well. We have a few more things to put in place before flipping the switch, but the transition should be seamless for you when it happens in the next few weeks. Let us know if you have any concerns or questions in the meantime.

Feature/Improvement: Naming of AI Audio Files
Hi everyone, an improvement request came up in our workflow. The AI audio feature generates an audio file named after the first sentence spoken, e.g., "Dies ist der erste Satz der gesprochen wurde.mp3" ("This is the first sentence that was spoken.mp3"). For clarity, we rename the audio files in the media library after they're generated, for example: Audio_1.mp3, Audio_2.mp3, and so on. But when corrections have to be made and a sentence is re-generated, the name changes back to the automatically generated one. Could the audio file keep its name once it has been renamed? Or, even better: could the naming be set up front and retained? Many thanks in advance!

AI Assistant in Rise 360: AI-Generated Text-to-Speech
Bring your Rise 360 course content to life with highly realistic and customizable AI-generated narration—without the time, expense, and hassle of recording. Just write or generate your script, define voice settings, and let AI Assistant do the rest. Ready to give your content a voice? Keep reading to find out how.

In this guide:
- Generate Text-to-Speech Narration
- Write Scripts With AI Script Writer
- Explore and Manage Voices
- FAQs

Generate Text-to-Speech Narration

To add AI-generated text-to-speech to any block that supports audio, follow one of these methods:
- For new blocks: Click AI Audio from the blocks shortcut bar at the bottom. Or, open the block library and select AI Blocks, then click Generate AI Audio.
- For existing blocks: Hover over the block and choose Content (pencil icon) from the floating toolbar on the left, then click the Add audio icon. Or, choose AI Block Tools (sparkle icon) from the toolbar and then click Add AI Audio.
- In a custom block: Click the Audio tab from the sidebar menu on the left.

The Course media window opens to the AI audio tab. Generate text-to-speech narration from here by following these steps:
1. Click the drop-down menu for Voice, then click a voice to select it. Click the View all voices link right underneath to view available voices in the Explore tab. Once you’ve selected a voice, click the Preview button on the right to listen to a quick preview.
2. Enter your script into the text field—up to a maximum of 40,000 characters depending on the model—in any of the supported languages. If you're adding audio to a block that contains text, you can bring that text into your script by clicking Insert block text. No script to work with? Let AI Assistant write a draft.
3. Reveal additional voice settings by clicking Advanced settings.
4. Click the Generate speech button at the bottom once you’re ready to generate audio. While AI Assistant generates the audio, a cancel option shows on the right side. Clicking it cancels the generation process.
When finished, the generated output is displayed on the right. From there, you can do any of the following:
- Listen to the generated audio before inserting it into your course by clicking the play icon.
- Review previously generated audio in the History list by clicking it, which also repopulates the related script and voice settings.
- Insert the selected output into your course by clicking the Insert button.

Edit or Replace Text-to-Speech Narration

To edit or replace existing text-to-speech narration, hover over the block, choose Content from the floating toolbar, and then click the Replace audio icon. Or click AI Block Tools from the toolbar, then choose Edit AI Audio. You can also click the three dots (Options menu) beside the audio player, then click Edit audio. In a custom block, right-click the audio element and select Replace with AI Audio. When the Course media window displays, you can do any of the following:
- Select a different voice.
- Update your script and voice settings.
- Generate a completely new text-to-speech narration.

Edit or Toggle the Audio Transcript

Your AI-generated audio includes a transcript for enhanced accessibility. You can manually edit the transcript or toggle its visibility on or off. Here’s how: Hover over the block and click Content from the floating toolbar on the left, then click Edit transcript (looks like a paper with a speaker icon). Or, click the three dots to the right of the audio player, then click Edit transcript. When the Edit transcript window appears, you can do any of the following:
- Listen to the generated audio.
- Toggle the transcript’s visibility.
- Manually edit the transcript.

Delete Text-to-Speech Narration

Deleting a text-to-speech narration only takes a few clicks. Just hover over the block, choose Content from the floating toolbar on the left, and then click the Remove audio icon. Alternatively, click the three dots beside the audio player, then choose Remove audio.
Deleting it from an audio block is even simpler—just delete the entire block by hovering over it and clicking the trash icon that appears on the right.

Write Scripts With AI Script Writer

If you don’t have a script ready, you can prompt AI Assistant to write and refine a draft by following these steps:
1. Click the Write with AI button in the upper right of the text entry field.
2. Enter your instructions into the prompt box or choose from a few pre-made prompts.
3. Once your script is generated, click the Edit with AI button to open the editing menu. One-click options are available, including Improve script writing, Fix spelling and grammar, Change tone, and more. For specific editing instructions, enter your text into the prompt box.

Generated scripts can also be edited manually by clicking into the text entry field.

Explore and Manage Voices

We’ve made it easy to explore and manage voices in one place. Get started by clicking the View all voices link just below the Voice dropdown menu in the AI audio tab of the Course media window. This opens the Voices window, where you’ll find two tabs—Explore and Favorites.

Explore

Here’s where you can browse the voice library, including a few curated picks we think you’ll love. Each card displays the voice’s name, description, and the number of users who have added the voice to their own libraries. Click the play icon on a card to preview a voice. Add a voice to your favorites by clicking the heart icon, which changes its state from outlined to solid. Clicking again reverts the heart to its outlined state and removes the voice from your favorites.

Recommended: This section showcases 22 of the top voices in the voice library that we’ve handpicked for their standout quality. Click Show more on the right to reveal the rest of the cards.

Voice Library: Browse more than 5,000 high-quality voices, with new ones added regularly. Just scroll down the page to load more cards into view.
Search for specific voices by entering text into the search box at the top. You can search voices by name, keyword, or description. The voice library uses a fuzzy search technique—finding results that are similar to, but not necessarily an exact match for, the given search term.

Narrow your search by clicking the Filter button to sort or filter voices. Select Sort by from the drop-down menu to reorder the list by Trending, Latest, Most users, or Most characters generated. Choose any filter—Use case, Category, Gender, or Age—by marking its checkbox. Applied filters appear as buttons at the bottom of the search box. To remove a filter, click the X icon inside the button. The available options for each filter are:
- Use case: Dialogue, Characters & Animation, Social Media, Narrative & Story, Entertainment & TV, Advertisement, Information & Education
- Category: High Quality, Professional
- Gender: Male, Female, Neutral
- Age: Young, Middle-aged, Old

Once you find the voice you want, you can use it right away or add it to your favorites. You can add up to 10,000 voices to your favorites. If you hit the 10,000 limit, the heart icon becomes grayed out; to free up space, click the heart icon on a voice you’ve favorited. Click the Use voice button when you’re ready to generate narration using the selected voice. Clicking the Cancel button takes you back to the AI audio tab with no voice selected.

Favorites

In the Favorites tab, browse and use the voices you’ve added from the voice library. Here, you can do any of these actions:
- Click the play icon on a card to preview a voice.
- Use the search box or the filter option to find a specific voice.
- Select a voice, then click the Use voice button to generate text-to-speech narration with that voice.
- Remove a voice from your favorites by clicking the heart icon in the lower right of the card.

FAQs

Are saved voices in Rise 360's voice library also visible in Storyline 360's voice library?
Voices added to favorites appear in both apps since they share the same voice library.

How many voices are available in the voice library?
More than 5,000 voices are available, with new voices added regularly.

Will learners be able to adjust the playback speed or download the audio generated by AI text-to-speech?
Learners can change the playback speed of the AI-generated audio; however, they can’t download it.

Can I upload my own custom voice to be used for text-to-speech narration?
This feature is not currently supported, but you're welcome to submit a feature request.

Will the AI-generated narration automatically update when the course content is translated?
Yes. AI-generated narrations translate automatically during course localization.

Can I add background music or sound effects to AI-generated speech in Rise 360, or is it limited to narration only?
AI-generated text-to-speech in Rise 360 only supports narration for now, but we'd love to hear your ideas—please feel free to submit a feature request.

What happens to the AI-generated audio when I convert my block?
Converting blocks with AI Assistant doesn’t affect AI audio, except in the following cases, where the audio gets removed:
- Converting from one block type to another that doesn’t support audio.
- Converting blocks with a single content item to blocks with multiple items, and vice versa. For example, from a paragraph to a timeline block, or from a timeline to a paragraph block.

What happens to my AI-generated audio if my Articulate 360 subscription expires or I lose access to AI features—will narration be preserved or removed?
Your AI-generated audio will be saved when your subscription ends, subject to our content retention policy. However, once you lose access to AI features, you won’t be able to edit it anymore, and it will become a regular audio file.

AI Assistant: Producing Highly Realistic Audio
As a course author, you want to do more than just present information—you want to create multi-sensory e-learning experiences that resonate with learners. Using sound creatively can help you get there. AI Assistant’s text-to-speech and sound effects features let you create highly realistic AI-generated voices and sound effects for more immersive and accessible content.

Originally, both of these features could only be accessed in Storyline 360. However, as of the July 2025 update, AI Assistant in Rise 360 can generate text-to-speech narration. Visit this user guide to get started creating AI-generated narrations in Rise 360. In Storyline 360, these features can be accessed from the Insert Audio dropdown in the AI Assistant menu within the ribbon. Find them under the Home or Insert tab when you’re in slide view, or chat with AI Assistant in the side panel for added convenience.

Bring Narration to Life with AI-generated Voices

If you’ve ever used classic text-to-speech, you probably wished the voices sounded less, well, robotic. AI Assistant’s text-to-speech brings narration to life with contextually aware AI-generated voices that sound more natural—and human! Check out the difference in quality between a standard voice, a neural voice, and an AI-generated voice by playing the text-to-speech examples below.

Standard Voice (audio sample)
Neural Voice (audio sample)
AI-generated Voice (audio sample)

To get started in Storyline 360, click the Insert Audio icon in the AI Assistant menu to open the Generate AI Audio dialog box. A library of AI-generated voices—which you can filter by Gender, Age, and Accent—displays under the Voices tab. The voices also have descriptions like “deep,” “confident,” “crisp,” “intense,” and “soothing,” plus categories that can help you determine their ideal use cases, from news broadcasts to meditation, or even ASMR.
Find these qualities under the voice’s name, and use the play button to preview the voice. Toggle the View option to Favorites to find all your favorite voices, or In project to see voices used in the current project. Once you’ve decided on a voice, click the button labeled Use to switch to the Text-to-Speech tab. Your chosen voice is already pre-selected.

Next, enter your script in the text box provided or click the add from slide notes link to copy notes from your slide. For accessibility, leave the Generate closed captions box checked—AI Assistant will generate closed captions automatically. You can instantly tell whether a text-to-speech narration has closed captions by the CC label that appears next to each output.

In Rise 360, insert an AI Audio block to open the Course media window. Under Voice in the AI audio tab, click the drop-down menu and select a voice from the Recommended list. Click the View all voices link right underneath to explore more voices in the voice library. Once you’ve selected a voice, enter your script in the text box, or click Insert block text if you’re adding audio to an existing block with text.

Currently, both apps offer the same 52 pre-made voices—listed below—and you can mark your favorites by clicking the heart icon. This way, you can easily access your preferred voices without having to scroll through the list. Note that voices labeled “Legacy” won’t be updated when future AI models improve. In Rise 360, pre-made voices can be identified by the absence of the open book icon on their voice cards.
Pre-made voices (non-legacy): Alice, Bill, Brian, Callum, Charlie, Chris, Clyde, Daniel, Eric, George, Harry, Jessica, Laura, Liam, Lily, Matilda, Rachel, River, Roger, Sarah, Thomas, Will

Pre-made voices (legacy): Adam, Antoni, Aria, Arnold, Charlotte, Dave, Domi, Dorothy, Drew, Elli, Emily, Ethan, Fin, Freya, Gigi, Giovanni, Glinda, Grace, James, Jeremy, Jessie, Joseph, Josh, Michael, Mimi, Nicole, Patrick, Paul, Sam, Serena

Find More Voices in the Voice Library

In addition to the pre-made voices, you also have access to an extended voice library with thousands of ultrarealistic, AI-generated voices that can be filtered by age, gender, and use case. Discover the right voice for your content by checking out the following user guides:
- Voice library in Rise 360
- Voice library in Storyline 360

Note: Voices in the voice library might disappear after their removal notice period ends or if the creator chooses to remove them. (Rise 360 doesn’t display the removal notice period on the voice cards.) If this occurs, your generated narration will still play but can no longer be edited. Switch to a different voice if you need to modify an existing narration created with a voice that’s no longer supported.

Adjust the Voice Settings

Unlike classic text-to-speech, the AI-generated voices in AI Assistant’s text-to-speech can be customized for a tailored voice performance. The Model setting lets you choose from three options:
- v3 (beta): Most expressive, with high emotional range and contextual understanding in over 70 languages. Allows a maximum of 3,000 characters. Note that this model is actively being developed; functionalities might change, or you might encounter unexpected behavior as we continue to improve it. For best results, check out the prompting techniques below.
- Multilingual v2 (default model): Highly stable and exceptionally accurate lifelike speech with support for 29 languages. Allows a maximum of 10,000 characters.
- Flash v2.5: Slightly less stable, but generates faster, with support for 32 languages. Allows a maximum of 40,000 characters.

Pro tip: Some voices sound better with certain models, and some models perform better in specific languages. Experiment with different combinations to find what works best. For example, the Matilda voice sounds more natural in Spanish with the Multilingual v2 model than with v3.

The Stability setting controls the balance between the voice’s steadiness and randomness. The Similarity setting determines how closely the AI should adhere to the original voice when attempting to replicate it. The defaults are 0.50 for the stability slider and 0.75 for the similarity slider, but you can play around with these settings to find the right balance for your content. Additional settings include Style exaggeration, which amplifies the style of the original voice, and Speaker boost, which enhances the similarity between the synthesized speech and the voice. Note that if either of those settings is adjusted, generating your speech will take longer.

Note: Some voices in the Multilingual v2 model tend to have inconsistent volume—fading out toward the end—when generating lengthy clips. This is a known issue with the underlying model, and our AI subprocessor for text-to-speech is working to address it. In the meantime, we suggest the following workarounds:
- Use a different voice
- Switch to the Flash v2.5 model
- Increase the voice’s stability
- Manually break your text into smaller chunks to generate shorter clips

Do I Need to Use SSML?

AI Assistant has limited support for speech synthesis markup language (SSML) because AI-generated voices are designed to understand the relationship between words and adjust delivery accordingly. If you need to manually control pacing, you can add a pause. The most consistent way to do that is by inserting the syntax <break time="1.5s" /> into your script. This creates an exact and natural pause in the speech.
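If you assemble narration scripts programmatically, the break syntax can be generated with a small helper. Here is a minimal JavaScript sketch (the withPauses helper and its three-second cap are illustrative assumptions, not part of AI Assistant; only the <break time="Ns" /> syntax comes from the guidance above):

```javascript
// Illustrative helper: interleave spoken text with exact pauses using the
// <break time="Ns" /> syntax. Strings are spoken; numbers become pauses.
function withPauses(segments) {
  return segments
    .map((part) => {
      if (typeof part === "number") {
        // Breaks of up to three seconds are recommended, so cap longer values.
        const seconds = Math.min(part, 3);
        return `<break time="${seconds}s" />`;
      }
      return part.trim();
    })
    .join(" ");
}

// Build a script with a one-second pause between two instructions.
const script = withPauses(["First, choose a voice.", 1, "Then click Generate speech."]);
console.log(script);
// → First, choose a voice. <break time="1s" /> Then click Generate speech.
```

Pasting the resulting string into the script field gives you predictable pauses without sprinkling tags by hand; remember that an excessive number of break tags can cause instability.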
For example: With their keen senses <break time="1.5s" /> cats are skilled hunters.

Use seconds to describe a break of up to three seconds in length. You can also try a simple dash (-) or em-dash (—) to insert a brief pause, or multiple dashes for a longer pause. An ellipsis (...) will also sometimes add a pause between words. However, these options may not work consistently, so we recommend the break syntax above for reliable results. Just keep in mind that an excessive number of break tags can potentially cause instability.

Prompting Techniques for v3 (beta)

The v3 (beta) model introduces emotional control via audio tags, enabling voices to laugh, whisper, be sarcastic, or show curiosity, among other options. The following tags control vocal delivery and emotional expression, add background sounds and effects, and include some experimental options for creative uses.

Voice and emotion: [laughs], [laughs harder], [starts laughing], [wheezing], [whispers], [sighs], [exhales], [sarcastic], [curious], [excited], [crying], [snorts], [mischievously]. Example: [whispers] Don’t look now, but I think they heard us.

Sounds and effects: [gunshot], [applause], [clapping], [explosion], [swallows], [gulps]. Example: [applause] Well, that went better than expected. [explosion] Never mind.

Experimental: [strong X accent] (replace X with the desired accent), [sings], [woo]. Example: [strong French accent] Zat is not what I ‘ad in mind, non non non.

Aside from the audio tags, punctuation also impacts delivery. Ellipses (...) add pauses, capitalization emphasizes specific words or phrases, and standard punctuation mimics natural speech rhythm. For example: “It was VERY successful! … [starts laughing] Can you believe it?”

Tips: Use audio tags that match the voice’s personality. A calm, meditative voice won’t shout, and a high-energy voice won’t whisper convincingly. Very short prompts can lead to inconsistent results.
For more consistent, focused output, we suggest prompts over 250 characters. Some experimental tags may be less consistent across voices, so test thoroughly before use. Combine multiple tags for complex emotional delivery, and try different combinations to find what works best for your selected voice. The tags above are simply a starting point; more effective ones may exist. Experiment with combining emotional states and actions, and use natural speech, proper punctuation, and clear emotional cues to get the best results.

Multilingual Voices Expand Your Reach

Another compelling benefit of AI-generated text-to-speech is the ability to bridge language gaps, allowing you to connect with international audiences. With support for over 70 languages depending on the model—including some with multiple accents and dialects—AI Assistant’s text-to-speech helps your content resonate with a global audience. All you have to do is type or paste your script in the supported language you want AI Assistant to use. (Even if a voice’s description notes a specific accent or language, AI Assistant will generate the narration in the language used in your script.) Note that some voices work best with certain accents or languages, so feel free to experiment with different voices to find the best fit for your needs. The lists below provide a quick rundown of supported languages.
Available in v3 (beta), Multilingual v2, and Flash v2.5: Arabic (Saudi Arabia), Arabic (UAE), Bulgarian, Chinese, Croatian, Czech, Danish, Dutch, English (Australia), English (Canada), English (UK), English (USA), Filipino, Finnish, French (Canada), French (France), German, Greek, Hindi, Indonesian, Italian, Japanese, Korean, Malay, Polish, Portuguese (Brazil), Portuguese (Portugal), Romanian, Russian, Slovak, Spanish (Mexico), Spanish (Spain), Swedish, Tamil, Turkish, Ukrainian

Available in v3 (beta) and Flash v2.5: Hungarian, Norwegian, Vietnamese

Available only in v3 (beta): Afrikaans (afr), Armenian (hye), Assamese (asm), Azerbaijani (aze), Belarusian (bel), Bengali (ben), Bosnian (bos), Catalan (cat), Cebuano (ceb), Chichewa (nya), Estonian (est), Galician (glg), Georgian (kat), Gujarati (guj), Hausa (hau), Hebrew (heb), Icelandic (isl), Irish (gle), Javanese (jav), Kannada (kan), Kazakh (kaz), Kirghiz (kir), Latvian (lav), Lingala (lin), Lithuanian (lit), Luxembourgish (ltz), Macedonian (mkd), Malayalam (mal), Mandarin Chinese (cmn), Marathi (mar), Nepali (nep), Pashto (pus), Persian (fas), Punjabi (pan), Serbian (srp), Sindhi (snd), Slovenian (slv), Somali (som), Swahili (swa), Telugu (tel), Thai (tha), Urdu (urd), Welsh (cym)

Create Sound Effects Using Prompts

Sound effects that align with your theme and content can highlight important actions or feedback, like clicking a button or choosing a correct answer, offering a more engaging and effective e-learning experience. With AI Assistant’s sound effects, you can now use prompts to easily create nearly any sound imaginable. No more wasting time scouring the web for pre-made sounds that may cost extra!

Start creating high-quality sound effects by going to the AI Assistant menu in the ribbon under the Home or Insert tab. Then, click the lower half of the Insert Audio icon, and choose Sound Effects. (You can also access it from the Audio dropdown within the Insert tab. Simply select Sound Effects under the AI Audio option.)
In the text box, describe the sound effect you want and choose a duration. Since AI Assistant understands natural language, sound effects can be created using anything from a simple prompt like “a single mouse click” to a very complex one that describes multiple sounds, or a sequence of sounds in a specific order. Just note that you have a maximum of 450 characters to describe the sound you want to generate. Listen to the following audio samples for sound effects created using a simple prompt and a complex one.

Prompt: A single mouse click (audio sample)
Prompt: Dogs barking, then lightning strikes (audio sample)

You can adjust the Duration—how long the sound effect plays—up to a maximum of 22 seconds. For example, if your prompt is “barking dog” and you set the duration to 10 seconds, you’ll get continuous barking, but a duration of two seconds gives you one quick bark. Adjusting the Prompt influence slider to the right makes AI Assistant strictly adhere to your prompt, while sliding it to the left allows freer interpretation.

Pro tip: You can instantly tell whether a sound effect has closed captions by the CC label that appears next to each output.

Some Pro Terms to Know

Using audio terminology—the specialized vocabulary audio experts use in their work—can help improve your prompts and produce even more dynamic sound effects. Here are a few examples:
- Braam: A deep, resonant, and often distorted bass sound used in media, particularly in trailers, to create a sense of tension, power, or impending doom.
- Whoosh: A quick, swooshing sound often used to emphasize fast motion, transitions, or dramatic moments.
- Impact: A sharp, striking noise used to signify a collision, hit, or sudden forceful contact, often to highlight a moment of action or emphasis.
- Glitch: A short, jarring, and usually digital noise that mimics a malfunction or distortion, commonly used to convey errors.
- Foley: The process of recreating and recording everyday sound effects, like movements and object sounds, in sync with the visuals of a film, video, or other media.

Here’s something fun to try! Generate a 3-second sound effect using the prompt “studio quality, sound designed whoosh and braam impact.” Increasing the duration may produce a better sound effect but will also create more dead air toward the end.

Pro tip: Onomatopoeias—words like “buzz,” “boom,” “click,” and “pop” that imitate natural sounds—are also useful sound effects terms. Use them in your prompts to create more realistic sound effects.

Video Tutorials

Want to learn more before getting started? Check out our video tutorials for additional guidance on using AI Assistant to generate text-to-speech and sound effects:
- Create AI-generated Text-to-Speech
- Create AI-generated Sound Effects

Articulate 360 Training also has video tutorials on other AI Assistant features:
- Use AI Assistant features in Rise 360
- Use AI Assistant features in Storyline 360

You must be logged in to your Articulate 360 account to watch the videos. Don’t have an account yet? Sign up for a free trial now!

You can now Localize Text to Speech on Rise and Storyline!
We’re thrilled to announce an exciting enhancement for all Localization Pro users! When you translate your course, the script will be automatically translated, and new audio will be generated and re-inserted in the target language, saving you time and effort.

What’s New

With this update, translation now includes:
- Automatic translation of the script (the text that powers Text to Speech).
- Automatic generation and insertion of localized audio in your translated course.
- Validator updates: Validators can now update the script, and importing their suggestions will automatically update the audio as well.

This functionality is now live in both Rise and Storyline.

Important Note for Storyline Users

This update applies to both AI Text to Speech and Legacy Text to Speech shapes. You’ll no longer need to manually replace audio after translation; it’s all handled automatically during the translation process!

Next Steps

We’d love to hear your feedback and learn about your experience using this new feature. Your insights are invaluable as we continue refining how teams create localized learning experiences.

Edit AI Images Using Free Windows Tools
In this webinar, you'll learn how to edit your AI-generated images using free tools that are already on your Windows PC. We'll walk through practical techniques for using Microsoft Photos to polish photographic-style AI images and Microsoft Paint to refine vector- and cartoon-style images. You'll see how to remove backgrounds, adjust colors, and make simple edits that get your AI images ready for your e-learning courses, all without buying expensive software.