AI Assistant: Setting the Stage for AI Magic
Before diving into the course creation process, you want your authoring tool to be tailored to your specific requirements so you can focus on developing high-quality content. With features designed to streamline your workflow, AI Assistant allows you to do just that. Available only in Rise 360, AI Assistant’s AI course drafts and AI settings features boost your efficiency—setting the stage for AI magic!

Get a Head Start

Generative AI speeds up course creation, but not all AI tools are built for e-learning, often leading to more time fixing than creating. Thankfully, AI Assistant’s AI course drafts workflow helps you turn concepts into structured, learner-focused content with just a few clicks.

The AI course drafts workflow involves four simple steps: gathering context, configuring course details, reviewing the course outline, and generating lesson drafts. The official AI course drafts user guide explains each step in more detail. Here are some tips to consider as you get started:

- During the first step, you can specify particular requirements when providing context, not just a description of the course content. For example, you might ask AI Assistant to write your content in a specific voice or character.
- In the second step, you can refine your Course information (topic, tone, audience, goals) by selecting any text and clicking the sparkle icon to edit with AI Assistant. To update all these fields at once, click the Edit with AI button to the right of the Course information heading, add any special instructions if needed, then click Try again. Similarly, you can manually edit each field or use Edit with AI next to the Learning objectives heading to update all learning objectives at once. To update both the Course information and Learning objectives at the same time, use the global Edit with AI button in the upper right.
- Just like in the second step, you can ask AI Assistant for writing help when reviewing and refining the generated course outline.
- Remember, AI Assistant generates content in between steps, but you can always click Stop and go back to return to a previous step. And if you go back to update your input in the previous steps, the global Edit with AI button shows a pulsing blue dot to remind you of the option to regenerate the content.
- After generating a course draft, you can easily return to the workflow by navigating to the AI Assistant menu on the course overview page and clicking Return to AI Outline.
- When reviewing your inputs from the Create course with AI view, click the tabs at the top or use the navigation buttons at the bottom to quickly switch between steps.
- Need to step away from the course drafting process? Don’t worry—AI Assistant will remember your progress during the first three steps and resume where you left off once you come back. However, canceling the process while AI Assistant is still creating lesson drafts will delete any lessons that have already been generated.

Keep Any Documents Handy

As a course author, you probably start gathering assets and reference materials right after choosing a topic and writing an outline. While you can now generate content from scratch using AI, you may also want to create courses based on existing documents. You can import source documents to use as a reference whenever you want to generate new content using AI Assistant. But instead of uploading reference materials each time, you can keep them all in one place by uploading them in the Source content tab of the AI settings window before you start.
Access AI settings from the AI Assistant dropdown menu in the upper right. Drag and drop files into the Source content tab or click Choose file to upload them. Supported file types and limitations are listed in the following table.

Content Type | File Extensions | File Size Limit | Character Limit
Portable Document Format | .pdf | 1 GB | 200k
Microsoft Word | .doc, .docx | 1 GB | 200k
Microsoft PowerPoint | .ppt, .pptx | 1 GB | 200k
Text | .text, .txt | 1 GB | 200k
Captions | .vtt, .srt, .sbv, .sub | 1 GB | 200k
Storyline 360 | .story | 1 GB | 200k
Audio | .mp3, .wav, .m4a | 1 GB | 200k
Video | .mp4, .webm, .ogg | 1 GB | 200k
Website URL | — | — | —

Tips:

- For PDF, Word, PowerPoint, and Storyline 360 source documents, AI Assistant only references extractable text. Images, audio, and video are not included. (See the sketch after these tips for one way to check a PDF's extractable text before uploading.)
- To use an existing Rise 360 course as source content, export the course to PDF, then upload the resulting file.
- Audio and video files are transcribed and then processed like caption files, so it’s faster if you just upload a caption file.
- Only text-based content contained in publicly accessible URLs is supported. Website URLs that require authentication, block crawlers, redirect to inaccessible content, or sit behind paywalls will not work.
- While there’s no hard limit on how many files you can upload to use as source content for AI Assistant, we recommend uploading only what you need for faster processing.
- If you don’t have entire files as reference, you can also copy and paste content from the source into the text box provided.
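Since AI Assistant only references extractable text in PDF, Word, PowerPoint, and Storyline 360 files, it can be handy to confirm how much extractable text a document actually contains before you upload it. Here is a minimal, hypothetical Python sketch for PDFs, assuming the third-party pypdf package and the 200k character limit from the table above; it isn't part of AI Assistant, just a local pre-check:

```python
# Hypothetical local pre-check: count the extractable text in a PDF before
# uploading it as AI Assistant source content. Requires the pypdf package.
from pypdf import PdfReader

CHARACTER_LIMIT = 200_000  # character limit listed in the table above

def extractable_character_count(path: str) -> int:
    """Return the number of characters of extractable text in a PDF."""
    reader = PdfReader(path)
    return sum(len(page.extract_text() or "") for page in reader.pages)

# "course_reference.pdf" is a placeholder file name.
count = extractable_character_count("course_reference.pdf")
status = "within" if count <= CHARACTER_LIMIT else "over"
print(f"{count:,} extractable characters ({status} the 200k limit)")
```

If the count comes back near zero, the PDF is probably image-based (scanned), and AI Assistant won't have any text to reference.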
AI Assistant: Summarizing Swiftly with Summary Generation

As a course author, you know summaries matter. They help learners retain key takeaways, enhancing the overall effectiveness of your training and making it easier for them to grasp and apply new knowledge. But after you’ve completed a course full of comprehensive content, the last thing you may want to do is summarize everything. Fortunately, AI Assistant can help.

Available in Rise 360 and Storyline 360, AI Assistant's summary generation feature helps you summarize swiftly by listing the key takeaways from all the lessons in your course. Then, you can customize the summary—choose the level of detail, pinpoint a focus area, and set the tone, audience, and style.

Pro tip: For best results, ask for a range rather than a specific number of summary items. For example, ask for “3 to 5 bullet list items” rather than “3 bullet list items.”

Generate Summaries in Rise 360

AI Assistant can generate summaries at both the course and lesson level. For a lesson summary, AI Assistant summarizes the content using a single paragraph block, while a course summary uses several block types, including paragraph, list, note, and statement. Access the course-level summary generation feature in the AI Assistant dropdown menu in the upper right of the course overview page. For lesson-level summary generation, use the AI Assistant dropdown menu within a lesson.

Generate Summaries in Storyline 360

In Storyline 360, you can access summary generation in story view or slide view. Story view gives you the option to create a course-level summary based on the content of the project, scenes, or specific slides, while in slide view, you can create a lesson-level summary based on the current scene or selected slides. Choose between a prebuilt layout and a blank slide for the generated summary. Note that when you select Prebuilt layout, AI Assistant may create multiple slides to fit all bullet points. (The original article includes example images of each option.)

Tips:

- If you pick a prebuilt layout, AI Assistant creates and uses a layout called “Summary Title” when the generated content fits on one slide. If the content needs extra slides, it also creates and uses a layout called “Summary Content” for those slides.
- If you select a blank slide, AI Assistant will not necessarily use the layout named “Blank,” but instead will choose the first available layout without content or a placeholder. If there's no empty layout, AI Assistant will choose the first layout from the slide master.
Merge media and audio

Hi!! I am working on a project where I have one video file and am adding multiple AI audio clips to it. I finally got the clips to match the video, but when I pause the video the audio continues. Would I have to set up conditional triggers to get the audio to pause when users pause the video? My one video file has 11 audio clips attached to it, so would I have to set up individual triggers (i.e., pause media when user clicks the video file), or is there a more effective way to complete this?
AI Assistant: Producing Highly Realistic Audio

As a course author, you want to do more than just present information—you want to create multi-sensory e-learning experiences that resonate with learners. Using sound creatively can help you get there. AI Assistant’s text-to-speech and sound effects features let you create highly realistic AI-generated voices and sound effects for more immersive and accessible content.

Originally, both of these features could only be accessed in Storyline 360. However, as of the July 2025 update, AI Assistant in Rise 360 can generate text-to-speech narration. Visit this user guide to get started creating AI-generated narrations in Rise 360.

In Storyline 360, these features can be accessed from the Insert Audio dropdown in the AI Assistant menu within the ribbon. Find them under the Home or Insert tab when you’re in slide view, or from the AI Assistant side panel as quick action buttons for added convenience.

Bring Narration to Life with AI-generated Voices

If you’ve ever used classic text-to-speech, you probably wished the voices sounded less, well, robotic. AI Assistant’s text-to-speech brings narration to life with contextually aware AI-generated voices that sound more natural—and human! Check out the difference in quality between a standard voice, a neural voice, and an AI-generated voice by playing the text-to-speech examples in the original article.

To get started, click the Insert Audio icon in the AI Assistant menu to open the Generate AI Audio dialog box. A library of AI-generated voices—which you can filter by Gender, Age, and Accent—displays under the Voices tab. The voices also have descriptions like “deep,” “confident,” “crisp,” “intense,” and “soothing” and categories that can help you determine their ideal use cases, from news broadcasts to meditation, or even ASMR. Find these qualities under the voice’s name, and use the play button to preview the voice.

Currently, there are 22 pre-made voices to choose from, and you can mark your favorites by clicking the heart icon. This way, you can easily access your preferred voices without having to scroll through the list. Toggle the View option to Favorites to find all your favorite voices, or In project to see voices used in the current project.

Once you’ve decided on a voice, click the button labeled Use to switch to the Text-to-Speech tab. Your chosen voice is already pre-selected. Next, enter your script in the text box provided or click the add from slide notes link to copy notes from your slide. The script can be a maximum of 5,000 characters. For accessibility, leave the Generate closed captions box checked—AI Assistant will generate closed captions automatically. You can instantly determine if your text-to-speech narration has closed captions by the CC label that appears next to each output.

Find More Voices in the Voice Library

In addition to the pre-made voices, you also have access to an extended voice library with thousands of ultrarealistic, AI-generated voices that can be filtered by age, gender, and use case. Discover the right voice for your content by clicking the Voice Library button on the right under the My Voices tab. Check out this article to learn how to use the voice library.

Note: Pre-made voices may lose compatibility with newer voice models and become unavailable.
Voices in the voice library might also disappear after their removal notice period ends or if the creator chooses to remove them. If this occurs, your generated narration will still play but can no longer be edited. Switch to a different voice if you need to modify an existing narration created with a voice that’s no longer supported.

Adjust the Voice Settings

Unlike classic text-to-speech, the AI-generated voices in AI Assistant’s text-to-speech can be customized for a tailored voice performance. The Model setting lets you choose from three different options:

- v3 (beta) - Most expressive, with high emotional range and contextual understanding in over 70 languages. Allows a maximum of 3,000 characters. Note that this model is actively being developed and is available only in Rise 360. Functionalities might change, or you might encounter unexpected behavior as we continue to improve it. For best results, check out the prompting techniques below.
- Multilingual v2 (default model) - Highly stable and exceptionally accurate lifelike speech with support for 29 languages. Allows a maximum of 10,000 characters.
- Flash v2.5 - Slightly less stable, but can generate faster, with support for 32 languages. Allows a maximum of 40,000 characters.

The Stability setting controls the balance between the voice’s steadiness and randomness. The Similarity setting determines how closely the AI should adhere to the original voice when attempting to replicate it. The defaults are 0.50 for the stability slider and 0.75 for the similarity slider, but you can play around with these settings to find the right balance for your content.

Additional settings include Style exaggeration, which amplifies the style of the original voice, and Speaker boost, which enhances the similarity between the synthesized speech and the voice. Note that if either of those settings is adjusted, generating your speech will take longer.

Note: Some voices in the Multilingual v2 model tend to have inconsistent volume—fading out toward the end—when generating lengthy clips. This is a known issue with the underlying model, and our AI subprocessor for text-to-speech is working to address it. In the meantime, we suggest the following workarounds:

- Use a different voice
- Switch to the Flash v2.5 model
- Increase the voice’s stability
- Manually break your text into smaller chunks to generate shorter clips (see the script-chunking sketch after the SSML section below)

Do I Need to Use SSML?

AI Assistant has limited support for speech synthesis markup language (SSML) because AI-generated voices are designed to understand the relationship between words and adjust delivery accordingly. If you need to manually control pacing, you can add a pause. The most consistent way to do that is by inserting the syntax <break time="1.5s" /> into your script. This creates an exact and natural pause in the speech. For example:

With their keen senses <break time="1.5s" /> cats are skilled hunters.

Use seconds to describe a break of up to three seconds in length. You can try a simple dash - or em-dash — to insert a brief pause, or multiple dashes for a longer pause. An ellipsis ... will also sometimes add a pause between words. However, these options may not work reliably, so we recommend the break syntax above. Just keep in mind that an excessive number of break tags can potentially cause instability.
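The per-model character limits and the fading-volume workaround above both come down to keeping individual clips short. As a rough illustration (not a feature of AI Assistant), here is a minimal Python sketch that splits a narration script on sentence boundaries so each chunk stays under a character budget you choose; the 2,000-character default is an arbitrary assumption:

```python
# Hypothetical helper: split a long narration script into sentence-bounded
# chunks so each chunk can be generated as its own, shorter clip.
import re

def chunk_script(script: str, max_chars: int = 2000) -> list[str]:
    # Split after sentence-ending punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", script.strip())
    chunks: list[str] = []
    current = ""
    for sentence in sentences:
        if current and len(current) + len(sentence) + 1 > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks

# Paste each returned chunk into the text box and generate it separately.
```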
Prompting Techniques for v3 (beta)

The v3 (beta) model introduces emotional control via audio tags, enabling voices to laugh, whisper, be sarcastic, or show curiosity, among other options. The tags below let you control vocal delivery and emotional expression and add background sounds and effects; a few experimental tags are included for creative uses.

Voice and emotion:
- [laughs], [laughs harder], [starts laughing], [wheezing]
- [whispers]
- [sighs], [exhales]
- [sarcastic], [curious], [excited], [crying], [snorts], [mischievously]
- Example: [whispers] Don’t look now, but I think they heard us.

Sounds and effects:
- [gunshot], [applause], [clapping], [explosion]
- [swallows], [gulps]
- Example: [applause] Well, that went better than expected. [explosion] Never mind.

Experimental:
- [strong X accent] (replace X with the desired accent)
- [sings], [woo]
- Example: [strong French accent] Zat is not what I ‘ad in mind, non non non.

Aside from the audio tags, punctuation also impacts delivery. Ellipses (...) add pauses, capitalization emphasizes specific words or phrases, and standard punctuation mimics natural speech rhythm. For example: “It was VERY successful! … [starts laughing] Can you believe it?”

Tips:

- Use audio tags that match the voice’s personality. A calm, meditative voice won’t shout, and a high-energy voice won’t whisper convincingly.
- Very short prompts can lead to inconsistent results. For more consistent, focused output, we suggest prompts over 250 characters.
- Some experimental tags may be less consistent across voices. Test thoroughly before use.
- Combine multiple tags for complex emotional delivery. Try different combinations to find what works best for your selected voice.
- The above list is simply a starting point; more effective tags may exist. Experiment with combining emotional states and actions to find what works best for your use case.
- Use natural speech, proper punctuation, and clear emotional cues to get the best results.

Multilingual Voices Expand Your Reach

Another compelling benefit of AI-generated text-to-speech is the ability to bridge language gaps, allowing you to connect with international audiences. With support for over 70 languages depending on the model—including some with multiple accents and dialects—AI Assistant’s text-to-speech helps your content resonate with a global audience. All you have to do is type or paste your script in the supported language you want AI Assistant to use. (Even though the voice description notes a specific accent or language, AI Assistant will generate the narration in the language used in your script.) Note that some voices tend to work best with certain accents or languages, so feel free to experiment with different voices to find the best fit for your needs. The lists below provide a quick rundown of supported languages.
Available in v3 (beta), Multilingual v2, and Flash v2.5: Arabic (Saudi Arabia), Arabic (UAE), Bulgarian, Chinese, Croatian, Czech, Danish, Dutch, English (Australia), English (Canada), English (UK), English (USA), Filipino, Finnish, French (Canada), French (France), German, Greek, Hindi, Indonesian, Italian, Japanese, Korean, Malay, Polish, Portuguese (Brazil), Portuguese (Portugal), Romanian, Russian, Slovak, Spanish (Mexico), Spanish (Spain), Swedish, Tamil, Turkish, Ukrainian

Available in v3 (beta) and Flash v2.5: Hungarian, Norwegian, Vietnamese

Available only in v3 (beta): Afrikaans (afr), Armenian (hye), Assamese (asm), Azerbaijani (aze), Belarusian (bel), Bengali (ben), Bosnian (bos), Catalan (cat), Cebuano (ceb), Chichewa (nya), Estonian (est), Galician (glg), Georgian (kat), Gujarati (guj), Hausa (hau), Hebrew (heb), Icelandic (isl), Irish (gle), Javanese (jav), Kannada (kan), Kazakh (kaz), Kirghiz (kir), Latvian (lav), Lingala (lin), Lithuanian (lit), Luxembourgish (ltz), Macedonian (mkd), Malayalam (mal), Mandarin Chinese (cmn), Marathi (mar), Nepali (nep), Pashto (pus), Persian (fas), Punjabi (pan), Serbian (srp), Sindhi (snd), Slovenian (slv), Somali (som), Swahili (swa), Telugu (tel), Thai (tha), Urdu (urd), Welsh (cym)

Create Sound Effects Using Prompts

Sound effects that align with your theme and content can highlight important actions or feedback, like clicking a button or choosing a correct answer, offering a more engaging and effective e-learning experience. With AI Assistant’s sound effects, you can now use prompts to easily create nearly any sound imaginable. No more wasting time scouring the web for pre-made sounds that may cost extra!

Start creating high-quality sound effects by going to the AI Assistant menu in the ribbon under the Home or Insert tab. Then, click the lower half of the Insert Audio icon and choose Sound Effects. (You can also access it from the Audio dropdown within the Insert tab. Simply select Sound Effects under the AI Audio option.)

In the text box, describe the sound effect you want and choose a duration. You can adjust the Prompt influence slider to give AI Assistant more or less creative license in generating the sound. Since AI Assistant understands natural language, sound effects can be created using anything from a simple prompt like “a single mouse click” to a very complex one that describes multiple sounds or a sequence of sounds in a specific order. Just note you have a maximum of 450 characters to describe the sound you want to generate. The original article includes audio samples generated from a simple prompt (“A single mouse click”) and a complex one (“Dogs barking, then lightning strikes”).

You can also adjust the Duration—how long the sound effect plays—up to a maximum of 22 seconds. For example, if your prompt is “barking dog” and you set the duration to 10 seconds, you’ll get continuous barking, but a duration of two seconds is one quick bark. Adjusting the Prompt influence slider to the right makes AI Assistant strictly adhere to your prompt, while sliding it to the left allows more free interpretation.

Pro tip: You can instantly determine if your sound effect has closed captions by the CC label that appears next to each output.

Some Pro Terms to Know

Using audio terminology—specialized vocabulary that audio experts use in their work—can help improve your prompts and produce even more dynamic sound effects.
Here are a few examples:

- Braam: A deep, resonant, and often distorted bass sound used in media, particularly in trailers, to create a sense of tension, power, or impending doom.
- Whoosh: A quick, swooshing sound often used to emphasize fast motion, transitions, or dramatic moments.
- Impact: A sharp, striking noise used to signify a collision, hit, or sudden forceful contact, often to highlight a moment of action or emphasis.
- Glitch: A short, jarring, and usually digital noise that mimics a malfunction or distortion, commonly used to convey errors.
- Foley: The process of recreating and recording everyday sound effects, like movements and object sounds, in sync with the visuals of film, video, or other media.

Here’s something fun to try! Generate a 3-second sound effect using the prompt “studio quality, sound designed whoosh and braam impact.” Increasing the duration may produce better sound effects but will also create more dead air toward the end.

Pro tip: Onomatopoeias—words like “buzz,” “boom,” “click,” and “pop” that imitate natural sounds—are also important sound effects terms. Use them in your prompts to create more realistic sound effects.

Video Tutorials

Want to learn more before getting started? Check out our video tutorials for additional guidance on using AI Assistant to generate text-to-speech and sound effects.

- Create AI-generated Text-to-Speech
- Create AI-generated Sound Effects

Articulate 360 Training also has additional video tutorials on using other AI Assistant features.

- Use AI Assistant features in Rise 360
- Use AI Assistant features in Storyline 360

You must be logged in to your Articulate 360 account to watch the videos. Don’t have an account yet? Sign up for a free trial now!
AI Assistant: Building Effective Quizzes and Knowledge Checks

Developing a good quiz or knowledge check is essential for assessing and reinforcing learning. But, as every course author knows, it’s also time-consuming. Designing questions that are clear, relevant, and aligned with your learning objectives isn't easy.

Effortless Quizzes

Available in Rise 360 and Storyline 360, AI Assistant’s quiz generation feature allows you to create a full quiz based on existing lessons in just a few clicks.

Quick Tip: When details are missing, AI Assistant may rely on general knowledge compiled from its training to fill in gaps. For best results, provide as much context and source material as possible. More information helps AI Assistant tailor content to your needs, and custom prompts can further guide it to stay focused on your course content instead of drawing from general knowledge.

Rise 360

In Rise 360, select Quiz generation from the AI Assistant dropdown menu on the course overview page. Customize your questions via prompt—set a focus topic, learning objective, and level of difficulty—or skip directly to quiz generation. Once the quiz is generated, you can open it to see more options, such as adding or editing questions.

Editing your quiz allows you to use AI Assistant to fine-tune the questions, answer choices, and feedback. For example, you can prompt AI Assistant to turn a multiple choice question into a multiple response question and add more answer choices. You can also change the learning objective or increase the difficulty level. If you want to edit the question feedback, you can do so with the write and edit inline feature. Simply select the feedback text you want to modify and click the sparkle icon in the floating toolbar to start editing with AI Assistant. You have the option to generate new questions from here as well.

AI Assistant in Rise 360 supports the following question types:

- Multiple choice
- Multiple response
- Fill-in-the-blank
- Matching

Storyline 360

In Storyline 360, select the Quiz icon in the AI Assistant menu on the ribbon or click the Generate quiz button from the AI Assistant tab in the side panel. Select all or just specific scenes and slides as a reference, and then enter a prompt to customize the questions. You can also skip customization to generate the questions in a new slide or a question bank. When you choose the latter, AI Assistant will create a new question bank and insert a new slide that draws from it. For best results, specify your learning objectives or ask AI Assistant to focus on a topic or difficulty level.

Once the quiz has been generated, you can continue to refine it by adding, deleting, or editing questions. AI Assistant will also generate a link that you can click to jump to the newly created questions. When you replace a question after editing, the original question slide is deleted and a new slide is added with the new question. Any other objects or custom triggers on the original question slide will be lost. To prevent that loss, choose the Insert below option and then copy and paste objects into the new slide before deleting the original question slide.

AI Assistant in Storyline 360 supports the following question types:

- Multiple choice
- Multiple response

How many questions are generated?

The underlying AI model can have difficulty fulfilling requests for a specific number of questions. In Rise 360, AI Assistant generates questions that cover the key points across your entire course, up to a maximum of 25 questions. In practice, it tends to generate one to two questions per lesson.

In Storyline 360, you can use the word count of your text content to determine how many questions are generated. Initially, AI Assistant splits text content into segments of 1,000 words each, with a maximum of seven segments. If your text content exceeds 7,000 words, AI Assistant splits it evenly over seven segments to stay within the limit. Each segment returns two questions, so you’ll always get at least two questions, up to a maximum of 14, depending on the total word count.
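For a quick back-of-the-envelope estimate of the Storyline 360 behavior described above, the heuristic works out to two questions per 1,000-word segment, capped at seven segments. Here is a small Python sketch of that arithmetic (an estimate only; actual results can vary):

```python
import math

def estimated_question_count(word_count: int) -> int:
    """Estimate Storyline 360 quiz questions from the source word count."""
    # Up to seven segments of roughly 1,000 words; each segment yields two questions.
    segments = min(7, max(1, math.ceil(word_count / 1000)))
    return segments * 2

print(estimated_question_count(2_500))   # 3 segments  -> 6 questions
print(estimated_question_count(12_000))  # capped at 7 -> 14 questions
```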
Single Question Generation

To insert a single question as a knowledge check, select the Question icon from the ribbon or click the Generate question button in the side panel. Select all or just specific slides as a reference, and then enter a topic. AI Assistant generates a full question draft that you can modify using custom prompts. Get creative in providing additional directions for AI Assistant to follow, or try some of the following prompts:

- Adjust the difficulty level
- Change the Bloom’s Taxonomy level
- Change the tone and target audience
- Change the question type from multiple choice to multiple response, or vice versa

You also have the option to choose one of the prebuilt prompts—either Change focus or Add an answer. Once you’re satisfied with the draft, click Insert to generate the question.

Tips:

- AI Assistant uses the Question layout for question or quiz slides. Customize this layout if you want to apply your personal or company brand style to question or quiz slides generated by AI Assistant.
- To add interactivity, try a freeform question. Just copy the question draft and cancel the quiz generation process. Paste the content into a new slide, make adjustments, and then convert the slide into a freeform question.

Quick Knowledge Checks

Available only in Rise 360, a knowledge check block can be generated based on the current lesson. Go to the AI Assistant menu in the upper right and then choose Generate knowledge check when you’re inside a lesson. You can also find this option in the block library under the Knowledge Check menu. Enter a topic, select the question type—choose from multiple choice, multiple response, fill-in-the-blank, or matching—and AI Assistant will generate a full draft.

Prompt AI Assistant to make the changes you want, such as changing the learning objective, difficulty level, or question type. You also have the option to choose prebuilt prompts, like changing the focus, answer choices, or the feedback type. Once you’ve finalized the question, click the Insert block button below the draft. Your knowledge check is inserted at the bottom of the page. Anytime you need to modify the block, simply hover over it and click AI Block Tools (the sparkle icon) on the left. You can select Edit with AI to edit the knowledge check using AI Assistant’s block editing feature.

Pro tip: Instantly convert blocks into interactive, AI-generated knowledge checks that boost learner retention by hovering over a supported block and clicking AI Block Tools from the content menu on the left. Choose Instant convert from the dropdown, then select Knowledge Check. The new knowledge check will be inserted right below the original block.

Video Tutorials

Want to learn more before getting started? Check out our video tutorials for additional guidance on using AI Assistant to generate quizzes and knowledge checks.
- Create AI-generated Quizzes in Rise 360
- Create AI-generated Knowledge Checks in Rise 360
- Create AI-generated Questions in Storyline 360

Articulate 360 Training also has additional video tutorials on using other AI Assistant features.

- Use AI Assistant features in Rise 360
- Use AI Assistant features in Storyline 360

You must be logged in to your Articulate 360 account to watch the videos. Don’t have an account yet? Sign up for a free trial now!
Instructional Design for Financial Services: Lessons from the UAE

Hello Articulate Community,

I’m reaching out to connect with fellow learning designers, educators, and organizations exploring fresh ways to drive professional development through impactful e-learning. I’m an Instructional Designer based in Dubai, specializing in designing and developing training solutions and digital learning experiences using Articulate tools. Over the years, I’ve had the opportunity to deliver large-scale training programs across the UAE, particularly in the financial sector, including banks and insurance companies.

What makes my approach unique is the balance between instructional creativity and scalability. I’ve not only crafted engaging modules but also rolled out end-to-end learning journeys, covering classroom, digital, and blended formats, ensuring they align with compliance standards, industry regulations, and organizational goals.

Some areas I’m passionate about include:

- Designing custom e-learning modules (SCORM-compliant) that seamlessly integrate into an existing LMS.
- Building training frameworks that can scale from a single team to entire institutions.
- Creating learning content that bridges complex financial concepts with learner-friendly, practical delivery.

I’m now exploring freelance opportunities to collaborate with organizations that want to reimagine their learning, scale their programs, and maximize learner engagement. If your organization is looking for instructional design expertise with proven UAE financial sector experience, I’d be happy to connect and discuss how we can work together. Feel free to reach out here or connect directly.

Looking forward to learning and collaborating with this inspiring community!
PowerPoint Meets AI: Testing 5 Motion Models

When it comes to adding AI motion, many designers bounce from one model to the next, often frustrated by strange results. Some eLearning apps don’t give the motion we want natively, and others take too long to configure, making it hard to know where to start. So, I tested 5 AI models on the same slide with the same prompt to see how each performed.

The results were surprising: one nailed the physics, another went off into odd morphs, and a few missed the mark entirely. The lesson? Every model has strengths, but the right fit depends on your design goals and willingness to experiment.

Watch the full tutorial: https://youtu.be/Udtg1X81mow

Download the AI Models for Motion – Comparisons chart: [AI Models for Motion - Comparisons.pdf - Google Drive]
AI Assistant in Storyline 360: Voice Library

You already know that AI Assistant makes generating ultra-realistic text-to-speech narrations easy. Now, with the addition of a voice library with thousands of voices and intuitive search and filter options, finding the right voice for your content is even easier. Keep reading to learn how to use the voice library in Storyline 360.

Browse Voices

Start exploring with either of the following methods:

- In Slide View, go to the Home or Insert tab on the ribbon. Then, click the Insert Audio drop-down arrow and choose Voices.
- In Slide View, go to the Insert tab and click the Audio drop-down arrow. Then, hover over AI Audio and choose Voices.

When the Generate AI Audio window displays, click the Voice Library button on the right. On the next screen, you’ll see a list of all the available voices in the library. Each row displays the name, description, and other details about the voice. Scroll down the list to load more voices. Some voices have long descriptions, so some of the text may be hidden. Hover over the description to reveal a tooltip with the complete text.

Preview Voices

To preview a voice, click the play icon—a little circle with a play button—just to the left of each name. You can preview voices one at a time.

Use a Voice

Once you find the voice you want, click the Use button located on the right. This adds the chosen voice to your library under the My Voices tab. The screen then automatically switches to the Text-to-Speech tab, where you can generate narrations using the selected voice.

If you find a voice you’d like to use later, save it to your library by clicking the Add to My Voices pill button located just to the left of the Use button. Once added, the button changes state to display Remove from My Voices. If you want to remove the voice from your library, click the button and it reverts to its initial state.

You can add up to 10,000 voices to your library. The Added Voices counter in the upper right corner displays the remaining number of voices you can add. Once you’ve added 10,000, the buttons become grayed out.

Other information about each voice is shown at the top of the buttons. Find the date a voice was added, its quality, the number of times it’s been added to user libraries, the total number of audio characters the voice has generated, and the removal notice period.

Voice Removal Notice Period

A voice may have a notice period, which specifies how long you’ll be able to access the voice if its creator decides to remove it from the voice library. When that happens, the removed voice will no longer be available from the library. If you’ve previously added it to My Voices, the removed voice will still appear on your list and can be used to generate new content, but you’ll see a warning and the date when it’s no longer available. Once the notice period expires, the voice will display an error, and it can no longer be previewed or used to generate new content. You can remove it to free up one of your custom voice slots.

Most voices have notice periods, but some don’t. Voices without a notice period disappear immediately from My Voices and the voice library if the voice creator decides to delete them. Generated content using a voice that’s been removed from the voice library will continue to function as a regular audio file.

Search, Sort, and Filter Voices

Right above the list of voices are the search, sort, and filter functions. From there, you can do any of the following:

- Search specific voices by entering text into the search box.
  You can search voices by name, keyword, or description. Note that the voice library uses a fuzzy search technique—finding results that are similar to, but not necessarily an exact match for, the given search term. (A short illustration of fuzzy matching appears after the filter table below.)
- Reorder the list by Trending, Latest, Most Used, or Most Characters Generated using the Sort dropdown menu. By default, voices are sorted by Most Used.
- Find voices based on age, gender, and use case with Filters. The table below lists the available options for each filter.

Filter | Options
Age | Young, Middle aged, Old
Gender | Man, Woman, Non-binary
Use Case | Narrative & Story, Conversational, Characters & Animation, Social Media, Entertainment & TV, Advertisement, Informative & Educational
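Curious what "fuzzy" matching means in practice? Here is a tiny, illustrative Python sketch using the standard library's difflib. It is not how the voice library is implemented and the voice names are made up; it only demonstrates the idea of returning near-matches rather than exact ones:

```python
# Illustration only: approximate (fuzzy) matching with the Python standard library.
from difflib import get_close_matches

voice_names = ["Amelia", "Marcus", "Sofia", "Newsroom Nate", "Meditative Maya"]

# A misspelled query still finds the closest voice name.
print(get_close_matches("Sophia", voice_names, n=3, cutoff=0.6))  # ['Sofia']
```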
AI Voice Generation emphasis in SL

Hi,

Has anybody discovered a way to reliably coax the AI voice generation engine in SL360 to add emphasis to a word or phrase? For example, in written text such as "read the instructions before starting", the italics and bold would strongly indicate the importance of reading before starting, and if I was creating my own voice recording I'd heavily lean into the word "before" to stress this.

I haven't yet found a way to do this with the AI VG engine, and you can't add bold or italics to the text dialog. I've experimented with asterisks etc., but it tends to just garble the output.

I know the whole point of AI is that it is supposed to work stuff like this out for itself through context and should do this automatically, but I do think it sometimes needs some guidance. Any ideas or tips?

Thanks, Paul
Fun Animated Timer for Gamification Projects

Hi Articulate heroes,

I wanted to highlight one very fast but fun-looking way to create timers for interactive projects. I learned about it from "Gamification Series; 05: Creating Tension with Timers." You can check out this amazing Gamification webinar series; it covers a few different ways to add timers to projects.

I used it in my recent project "Cooking Frienzy", a Jeopardy-style cooking game. (BTW, you can check out the full game: Cooking Frienzy.)

So, here are the steps:

1) Create or find a "timer" picture. It could be any image with a transparent background that works for your theme (in my case it is a "Pomodoro" timer, made with AI help and saved as a .png).
2) Add this image as a picture (insert an image).
3) Go to the Animation tab.
4) Choose the Exit Animation "Wipe", go to "Effect Options" - "From Right", and set the animation timer for whatever time you need (10 sec., 30 sec., 1 min, etc.).
5) Set triggers for what will happen after the timer is done (animation completed): e.g., jump to the next slide, show "result-fail", etc.
6) Preview and adjust if needed 🤞