AI Assistant
442 Topics

AI Features Not Showing Up
Hey Community, I'm in the process of updating some yearly content, and when I went to use the new AI features in Storyline, they weren't showing up. I confirmed that my license was active, yet none of the features were available. I thought it might be because I was updating an older piece of content (from before I had the AI license), but I opened a more recent piece and the features still weren't showing up. Is anyone else running into this problem? I'm trying to figure out whether they just got disabled or whether something else is wrong here. Thanks.

Can I create a consistent AI character
Hello and Happy New Week! I was wondering whether there is a way to create a consistent character using AI Assist. I picture numerous images of the same "person" doing different tasks, across different slides, throughout my e-learning. Is such a thing possible, or should I just go grab some lunch and stop being delusional? TGIM, Dan

Rise 360 - problem with an AI-generated course
Hi, community, I have an issue with an AI-generated course. The course was generated on February 3rd. Today, while adjusting the theme and other details, I noticed that the second lesson is still generating. Interestingly, if I access the snapshot from the day the course was created, I can view the second lesson, but when I try to restore that version, the second lesson still appears as if it's being generated by AI. Link to the video where I show the issue.

Adding alt-tag in Rise
Hello, it would be so helpful to see a larger image preview when adding alt tags in Rise Process blocks; otherwise, it is hard to write them without seeing the image clearly. See screenshot attached. Another suggestion: please add a feature where AI automatically adds the alt tags and we edit them later to suit. Thank you, Sumant

Japanese AI Text-to-Speech Quality
Has anyone had success using a particular voice for AI Text-to-Speech in Japanese? I have tried several and continue to get feedback that the text-to-speech quality is very poor, and some voices randomly mix Japanese, Chinese, and Korean accents even within a single paragraph. All of the translated audio text has been thoroughly reviewed and is accurate. Here are the latest voices I have tried and the general feedback received:

- Alva: Good (the voice used in the course)
- Akira: Poor
- Hajime: Poor
- Gojo: Acceptable
- Ken: Good (a possible alternative, but it does not address the root cause)
- Masa: Acceptable

In the Advanced settings, I have Multilingual v2 selected for each one. Note that even for the voices marked Good or Acceptable, the reviewer still indicates the quality is not at a level they feel can be used. They believe the root cause is that the program invoking text-to-speech does not consistently retain the selected voice option and incorrectly auto-detects the language, switching voices even within a single sentence. I am not sure how to respond to that. I would appreciate any insights anyone else has on this topic!

Professional Hearts - a microlearning teaser
I tried to build a course within half an hour, with three goals in mind:

- use only Articulate content library assets
- use AI-generated course content
- include some kind of interactive element, also AI vibe-coded

Here are the results! They show that letting AI completely dictate the content, at least for more mundane trainings like this one, is quite effective if you're just trying to "get something out there." The game looked almost exactly like what I pictured, but it required a very precise prompt, which I'll share here so you can try this out for yourself:

"Now, a course showstopper: Using the articulate Rise coded block, I want to include a small coded block that provides an interactive game: a "shoot the moving target," meant to get the learner mildly amused and continuing to follow the training. The game is a side-facing view with a bow and arrow (or just an arrow) on the left side of the screen and two objects on the right: a heart (representing workplace relationships) and a chart/graph (representing work). They are moving up and down on the right side of the screen, offset so that the heart is in front and the chart is behind. they are not intersecting with each other, and moving in alternating times and speeds such that there is not always a clear line that could be drawn from the bow and arrow to the chart in the rear. the objective is for the learner to press the "fire" button and launch the moderately-fast moving arrow across the screen to hit the "work" chart icon and not the "romance" heart icon. If they hit the chart it's a "great job" outcome, if they hit the heart it's a "Love over work, eh?", and if they hit nothing then it's a "try again". in any outcome, a "start over" button appears in the middle of the screen and resets the interactive element. Can you provide the code for this?"
I was hoping it would use the "code" block to keep everything within Articulate Rise, but I think it recognized right away that, for what I wanted to do, an HTML embed was more effective and flexible (hosting the code on my GitHub).

Pronunciation in AI Audio (TTS)
I am hoping someone can clarify the use of SSML with ElevenLabs voices in Storyline 360. I've read variously on E-Learning Heroes that SSML is not supported with ElevenLabs voices, or that it is supported but only for <phoneme> and <break>, or that it's supported for other tags too. I've been trying to use the phoneme tag, but when I generate speech, it simply skips over the tagged content. This is a sample of one use I'm trying to get to work:

Repair of this injury can be addressed laparoscopically in stable patients. The first priority is adequate exposure. Additional ports should be placed as needed to isolate the injury and improve visualization. If the bowel is friable, it can be handled indirectly using <speak><phoneme alphabet="ipa" ph="æ.tɹ.ə.mˈæɾɪk">atraumatic</phoneme></speak> graspers on the mesentery, and the edges of an enterotomy can be approximated to limit contamination if the injury is small.

When generated, all the speech between the opening <speak> and closing </speak> tags is simply skipped, but the rest of the speech is rendered. Since I am not sure whether SSML is even supported, I don't know if I'm writing the tags incorrectly or if SSML simply isn't supported. (Solved)

AI Assistant & Rise: Smart Strategies for Efficient E-Learning
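One thing worth ruling out before blaming the engine: in the W3C SSML specification, <speak> is the document root, so it is meant to wrap the entire utterance, not a mid-sentence fragment, with <phoneme> applied only to the word in question. Whether a given ElevenLabs model actually honors <phoneme> is a separate issue I can't confirm, but a quick sanity check is to wrap the whole sentence in a single <speak> element and verify the markup is well-formed XML. This Python sketch (the sample text is just the sentence from the post; it checks structure only, not engine support) does exactly that:

```python
import xml.etree.ElementTree as ET

# Per the W3C SSML spec, <speak> is the root element: it wraps the whole
# utterance, and <phoneme> is applied only to the word needing the IPA hint.
ssml = (
    "<speak>"
    "If the bowel is friable it can be handled indirectly using "
    '<phoneme alphabet="ipa" ph="æ.tɹ.ə.mˈæɾɪk">atraumatic</phoneme>'
    " graspers on the mesentery."
    "</speak>"
)

# fromstring() raises ParseError if the markup is malformed, so a clean
# parse confirms the tags at least nest correctly before you test a voice.
root = ET.fromstring(ssml)
print(root.tag)                        # speak
print(root.find("phoneme").text)       # atraumatic
print(root.find("phoneme").get("ph"))  # the IPA transcription
```

If a snippet like this parses cleanly but the engine still skips the tagged word, that points to the voice or model not supporting the tag rather than a markup error.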
New to AI Assistant and Rise? This session is your gateway to creating engaging e-learning content with ease. Learn how to kickstart your projects, blend AI-generated content seamlessly, and add interactivity to your courses. We'll cover practical tips for working with text, images, and quizzes to help you build impressive courses quickly.

Overview of AI Assistant in Storyline
Speed up content creation and unleash your creativity with AI Assistant in Storyline. In this session, you'll learn how to partner with AI Assistant to improve writing, generate content and images, create text-to-speech narration, add sound effects, and more.