AI Assistant: Setting the Stage for AI Magic
Before diving into the course creation process, you want your authoring tool to be tailored to your specific requirements so you can focus on developing high-quality content. With features designed to streamline your workflow, AI Assistant lets you do just that. Available only in Rise 360, AI Assistant's AI course drafts and AI settings features boost your efficiency—setting the stage for AI magic!

Get a Head Start

Generative AI speeds up course creation, but not all AI tools are built for e-learning, which often means more time spent fixing than creating. Thankfully, AI Assistant's AI course drafts workflow helps you turn concepts into structured, learner-focused content with just a few clicks.

The AI course drafts workflow involves four simple steps: gathering context, configuring course details, reviewing the course outline, and generating lesson drafts. The official AI course drafts user guide explains each step in more detail. Here are some tips to consider as you get started:

- During the first step, you can specify particular requirements when providing context, not just a description of the course content. For example, you might ask AI Assistant to write your content in a specific voice or character.
- In the second step, you can regenerate your Course information (topic, tone, audience, goals) by selecting any text and clicking the sparkle icon to edit with AI Assistant. To update all these fields at once, click the Edit with AI button to the right of the Course information heading, add any special instructions if needed, then click Try again.
- Similarly, you can manually edit each field or use Edit with AI next to the Learning objectives heading to update all learning objectives at once. To update both the Course information and Learning objectives at the same time, use the global Edit with AI button in the upper right.
- Just like in the second step, you can ask AI Assistant for writing help when reviewing and refining the generated course outline.
- Remember, AI Assistant generates content between steps, but you can always click Stop and go back to return to a previous step. And if you go back to update your input in a previous step, the global Edit with AI button shows a pulsing blue dot to remind you of the option to regenerate the content.
- After generating a course draft, you can easily return to the workflow by navigating to the AI Assistant menu on the course overview page and clicking Return to AI Outline.
- When reviewing your inputs from the Create course with AI view, click the tabs at the top or use the navigation buttons at the bottom to quickly switch between steps.
- Need to leave your course drafting process? Don't worry—AI Assistant remembers your progress during the first three steps and resumes where you left off once you come back. Note, however, that canceling the process while AI Assistant is still creating lesson drafts will also delete lessons that have already been generated.

Keep Any Documents Handy

As a course author, you probably start gathering assets and reference materials right after choosing a topic and writing an outline. While you can now generate content from scratch using AI, you may also want to create courses based on existing documents. You can import source documents to use as a reference whenever you want to generate new content using AI Assistant.

Instead of uploading reference materials each time, you can keep them all in one place by uploading them in the Source content tab of the AI settings window before you start. Access AI settings from the AI Assistant dropdown menu in the upper right. Drag and drop files into the Source content tab or click Choose file to upload them. Supported file types and limitations are listed in the following table.
| Content Type | File Extension | File Size Limit | Character Limit |
| Portable Document Format | .pdf | 1 GB | 200k |
| Microsoft Word | .doc, .docx | 1 GB | 200k |
| Microsoft PowerPoint | .ppt, .pptx | 1 GB | 200k |
| Text | .text, .txt | 1 GB | 200k |
| Captions | .vtt, .srt, .sbv, .sub | 1 GB | 200k |
| Storyline 360 | .story | 1 GB | 200k |
| Audio | .mp3, .wav, .m4a | 1 GB | 200k |
| Video | .mp4, .webm, .ogg | 1 GB | 200k |
| Website URL | — | — | — |

Tips:

- For PDF, Word, PowerPoint, and Storyline 360 source documents, AI Assistant only references extractable text. Images, audio, and video are not included.
- To use an existing Rise 360 course as source content, export the course to PDF, then upload the resulting file.
- Audio and video files are transcribed and then processed like caption files, so it's faster to upload a caption file directly if you have one.
- Only text-based content contained in publicly accessible URLs is supported. Website URLs that require authentication, block crawlers, redirect to inaccessible content, or sit behind paywalls will not work.
- While there's no hard limit on how many files you can upload to use as source content for AI Assistant, we recommend uploading only what you need for faster processing.

If you don't have entire files to use as a reference, you can also copy and paste content from the source into the text box provided.

AI Assistant: Producing Highly Realistic Audio
As a course author, you want to do more than just present information—you want to create multi-sensory e-learning experiences that resonate with learners. Using sound creatively can help you get there. AI Assistant's text-to-speech and sound effects features let you create highly realistic AI-generated voices and sound effects for more immersive and accessible content.

Originally, both of these features could only be accessed in Storyline 360. However, as of the July 2025 update, AI Assistant in Rise 360 can also generate text-to-speech narration. Visit this user guide to get started creating AI-generated narration in Rise 360. In Storyline 360, these features can be accessed from the Insert Audio dropdown in the AI Assistant menu within the ribbon. Find them under the Home or Insert tab when you're in slide view, or from the AI Assistant side panel as quick action buttons for added convenience.

Bring Narration to Life with AI-generated Voices

If you've ever used classic text-to-speech, you probably wished the voices sounded less, well, robotic. AI Assistant's text-to-speech brings narration to life with contextually aware AI-generated voices that sound more natural—and human! Check out the difference in quality between a standard voice, a neural voice, and an AI-generated voice by playing the text-to-speech examples below.

(Audio examples: Standard Voice, Neural Voice, AI-generated Voice)

To get started, click the Insert Audio icon in the AI Assistant menu to open the Generate AI Audio dialog box. A library of AI-generated voices—which you can filter by Gender, Age, and Accent—displays under the Voices tab. The voices also have descriptions like "deep," "confident," "crisp," "intense," and "soothing," plus categories that can help you determine their ideal use cases, from news broadcasts to meditation, or even ASMR.
Find these qualities under the voice's name, and use the play button to preview the voice. Currently, there are 52 pre-made voices to choose from, and you can mark your favorites by clicking the heart icon. This way, you can easily access your preferred voices without having to scroll through the list. Toggle the View option to Favorites to find all your favorite voices, or to In project to see voices used in the current project.

Once you've decided on a voice, click the Use button to switch to the Text-to-Speech tab. Your chosen voice is already pre-selected. Next, enter your script in the text box provided or click the add from slide notes link to copy notes from your slide. The script can be a maximum of 5,000 characters. For accessibility, leave the Generate closed captions box checked—AI Assistant will generate closed captions automatically. You can instantly determine if your text-to-speech narration has closed captions by the CC label that appears next to each output.

Find More Voices in the Voice Library

In addition to the pre-made voices, you also have access to an extended voice library with thousands of ultrarealistic, AI-generated voices that can be filtered by age, gender, and use case. Discover the right voice for your content by clicking the Voice Library button on the right under the My Voices tab. Check out this article to learn how to use the voice library.

Note: Pre-made voices might lose compatibility and may not work well with newer voice models. Voices in the voice library might also disappear after their removal notice period ends or if the creator chooses to remove them. If this occurs, your generated narration will still play but can no longer be edited. Switch to a different voice if you need to modify an existing narration created with a voice that's no longer supported.

Adjust the Voice Settings

Unlike classic text-to-speech, the AI-generated voices in AI Assistant's text-to-speech can be customized for a tailored voice performance.
The Model setting lets you choose from three different options:

- v3 (beta): Most expressive, with high emotional range and contextual understanding in over 70 languages. Allows a maximum of 3,000 characters. Note that this model is actively being developed and is available only in Rise 360. Functionality might change, or you might encounter unexpected behavior as we continue to improve it. For best results, check out the prompting techniques below.
- Multilingual v2 (default model): Highly stable and exceptionally accurate lifelike speech with support for 29 languages. Allows a maximum of 10,000 characters.
- Flash v2.5: Slightly less stable, but can generate faster, with support for 32 languages. Allows a maximum of 40,000 characters.

The Stability setting controls the balance between the voice's steadiness and randomness, while the Similarity setting determines how closely the AI should adhere to the original voice when attempting to replicate it. The defaults are 0.50 for the stability slider and 0.75 for the similarity slider, but you can play around with these settings to find the right balance for your content. Additional settings include Style exaggeration, which amplifies the style of the original voice, and Speaker boost, which enhances the similarity between the synthesized speech and the voice. Note that if either of those settings is adjusted, generating your speech will take longer.

Note: Some voices in the Multilingual v2 model tend to have inconsistent volume—fading out toward the end—when generating lengthy clips. This is a known issue with the underlying model, and our AI subprocessor for text-to-speech is working to address it. In the meantime, we suggest the following workarounds:

- Use a different voice
- Switch to the Flash v2.5 model
- Increase the voice's stability
- Manually break your text into smaller chunks to generate shorter clips

Do I Need to Use SSML?
AI Assistant has limited support for speech synthesis markup language (SSML) because AI-generated voices are designed to understand the relationship between words and adjust delivery accordingly. If you need to manually control pacing, you can add a pause. The most consistent way to do that is by inserting the syntax <break time="1.5s" /> into your script. This creates an exact and natural pause in the speech. For example:

With their keen senses <break time="1.5s" /> cats are skilled hunters.

Use seconds to describe a break of up to three seconds in length. You can also try a simple dash (-) or em-dash (—) to insert a brief pause, or multiple dashes for a longer pause. An ellipsis (...) will also sometimes add a pause between words. However, these options may not work reliably, so we recommend the break syntax above for consistency. Just keep in mind that an excessive number of break tags can potentially cause instability.

Prompting Techniques for v3 (beta)

The v3 (beta) model introduces emotional control via audio tags, enabling voices to laugh, whisper, be sarcastic, or show curiosity, among other options. The following lists cover tags you can use to control vocal delivery and emotional expression, as well as to add background sounds and effects. They also include some experimental tags for creative uses.

Voice and emotion: [laughs], [laughs harder], [starts laughing], [wheezing], [whispers], [sighs], [exhales], [sarcastic], [curious], [excited], [crying], [snorts], [mischievously]
Example: [whispers] Don't look now, but I think they heard us.

Sounds and effects: [gunshot], [applause], [clapping], [explosion], [swallows], [gulps]
Example: [applause] Well, that went better than expected. [explosion] Never mind.

Experimental: [strong X accent] (replace X with the desired accent), [sings], [woo]
Example: [strong French accent] Zat is not what I 'ad in mind, non non non.

Aside from the audio tags, punctuation also impacts delivery. Ellipses (...)
add pauses, capitalization emphasizes specific words or phrases, and standard punctuation mimics natural speech rhythm. For example: "It was VERY successful! … [starts laughing] Can you believe it?"

Tips:

- Use audio tags that match the voice's personality. A calm, meditative voice won't shout, and a high-energy voice won't whisper convincingly.
- Very short prompts can lead to inconsistent results. For more consistent, focused output, we suggest prompts over 250 characters.
- Some experimental tags may be less consistent across voices. Test thoroughly before use.
- Combine multiple tags for complex emotional delivery. Try different combinations to find what works best for your selected voice.
- The list above is simply a starting point; more effective tags may exist. Experiment with combining emotional states and actions to find what works best for your use case.
- Use natural speech, proper punctuation, and clear emotional cues to get the best results.

Multilingual Voices Expand Your Reach

Another compelling benefit of AI-generated text-to-speech is the ability to bridge language gaps, allowing you to connect with international audiences. With support for over 70 languages depending on the model—including some with multiple accents and dialects—AI Assistant's text-to-speech helps your content resonate with a global audience. All you have to do is type or paste your script in the supported language you want AI Assistant to use. (Even though the voice description notes a specific accent or language, AI Assistant will generate the narration in the language used in your script.) Note that some voices tend to work best with certain accents or languages, so feel free to experiment with different voices to find the best fit for your needs. The table below provides a quick rundown of supported languages.
Available in v3 (beta), Multilingual v2, and Flash v2.5: Arabic (Saudi Arabia), Arabic (UAE), Bulgarian, Chinese, Croatian, Czech, Danish, Dutch, English (Australia), English (Canada), English (UK), English (USA), Filipino, Finnish, French (Canada), French (France), German, Greek, Hindi, Indonesian, Italian, Japanese, Korean, Malay, Polish, Portuguese (Brazil), Portuguese (Portugal), Romanian, Russian, Slovak, Spanish (Mexico), Spanish (Spain), Swedish, Tamil, Turkish, Ukrainian

Available in v3 (beta) and Flash v2.5: Hungarian, Norwegian, Vietnamese

Available only in v3 (beta): Afrikaans (afr), Armenian (hye), Assamese (asm), Azerbaijani (aze), Belarusian (bel), Bengali (ben), Bosnian (bos), Catalan (cat), Cebuano (ceb), Chichewa (nya), Estonian (est), Galician (glg), Georgian (kat), Gujarati (guj), Hausa (hau), Hebrew (heb), Icelandic (isl), Irish (gle), Javanese (jav), Kannada (kan), Kazakh (kaz), Kirghiz (kir), Latvian (lav), Lingala (lin), Lithuanian (lit), Luxembourgish (ltz), Macedonian (mkd), Malayalam (mal), Mandarin Chinese (cmn), Marathi (mar), Nepali (nep), Pashto (pus), Persian (fas), Punjabi (pan), Serbian (srp), Sindhi (snd), Slovenian (slv), Somali (som), Swahili (swa), Telugu (tel), Thai (tha), Urdu (urd), Welsh (cym)

Create Sound Effects Using Prompts

Sound effects that align with your theme and content can highlight important actions or feedback, like clicking a button or choosing a correct answer, offering a more engaging and effective e-learning experience. With AI Assistant's sound effects, you can now use prompts to easily create nearly any sound imaginable. No more wasting time scouring the web for pre-made sounds that may cost extra!

Start creating high-quality sound effects by going to the AI Assistant menu in the ribbon under the Home or Insert tab. Then, click the lower half of the Insert Audio icon and choose Sound Effects. (You can also access it from the Audio dropdown within the Insert tab. Simply select Sound Effects under the AI Audio option.)
In the text box, describe the sound effect you want and choose a duration. You can adjust the Prompt influence slider to give AI Assistant more or less creative license in generating the sound. Since AI Assistant understands natural language, sound effects can be created using anything from a simple prompt like "a single mouse click" to a very complex one that describes multiple sounds or a sequence of sounds in a specific order. Just note that you have a maximum of 450 characters to describe the sound you want to generate. Play the following audio samples to hear sound effects created using a simple prompt and a complex one.

(Audio sample) Prompt: A single mouse click
(Audio sample) Prompt: Dogs barking, then lightning strikes

You can also adjust the Duration—how long the sound effect plays—up to a maximum of 22 seconds. For example, if your prompt is "barking dog" and you set the duration to 10 seconds, you'll get continuous barking, while a duration of two seconds produces one quick bark. Moving the Prompt influence slider to the right makes AI Assistant adhere strictly to your prompt, while sliding it to the left allows a freer interpretation.

Pro tip: You can instantly determine if your sound effect has closed captions by the CC label that appears next to each output.

Some Pro Terms to Know

Using audio terminology—specialized vocabulary that audio experts use in their work—can help improve your prompts and produce even more dynamic sound effects. Here are a few examples:

- Braam: A deep, resonant, and often distorted bass sound used in media, particularly in trailers, to create a sense of tension, power, or impending doom.
- Whoosh: A quick, swooshing sound often used to emphasize fast motion, transitions, or dramatic moments.
- Impact: A sharp, striking noise used to signify a collision, hit, or sudden forceful contact, often to highlight a moment of action or emphasis.
- Glitch: A short, jarring, and usually digital noise that mimics a malfunction or distortion, commonly used to convey errors.
- Foley: The process of recreating and recording everyday sound effects, like movements and object sounds, in sync with the visuals of a film, video, or other media.

Here's something fun to try! Generate a 3-second sound effect using the prompt "studio quality, sound designed whoosh and braam impact." Increasing the duration may produce better sound effects but will also create more dead air toward the end.

Pro tip: Onomatopoeias—words like "buzz," "boom," "click," and "pop" that imitate natural sounds—are also useful sound effect terms. Use them in your prompts to create more realistic sound effects.

Video Tutorials

Want to learn more before getting started? Check out our video tutorials for additional guidance on using AI Assistant to generate text-to-speech and sound effects.

- Create AI-generated Text-to-Speech
- Create AI-generated Sound Effects

Articulate 360 Training also has additional video tutorials on using other AI Assistant features.

- Use AI Assistant features in Rise 360
- Use AI Assistant features in Storyline 360

You must be logged in to your Articulate 360 account to watch the videos. Don't have an account yet? Sign up for a free trial now!

Designing Immersive Phone Conversations in Storyline
Ever have two characters talk in a training module, but it still feels flat, even with speech bubbles, audio, and triggers? This free Storyline phone conversation template changes that. Whether you're designing for sales, compliance, healthcare, or support, it creates real, layered conversations that feel like you're eavesdropping on a call.

- Animated phone effects
- Realistic voiceover dialogue
- Transparent APNG waveforms (way better than GIFs!)
- Custom triggers for pick-up/end call
- Clean, modern layout with animated text

Watch how it works: https://www.youtube.com/watch?v=kMpUcYJRNnE
Preview the demo: https://www.redesignedminds.com/Discuss/story.html
Download it free: https://drive.google.com/file/d/19AvmE7q3PAUbXoNKIViQtPNqCwUoFDQW/view?usp=sharing

If your training includes a conversation, this is how you bring it to life.

A Career in Learning Design
This project started with a simple idea — what if you could follow one person's entire learning design career, step by step? "The ID Path" is a character-based experience that follows the fictional career journey of Olivia Wilson, a learning designer whose path begins as a Junior ID Assistant and evolves into a leadership role as a Chief Learning Strategist. The goal is to highlight not just career progression, but also how responsibilities and skills evolve along the way.

About the project

Viewers can explore five key roles from Olivia's career using a timeline of circular photo icons. Each click opens a Polaroid-style pop-out layer where Olivia's portrait is paired with a brief story and three key skills that define that stage. The character pop-out effect is used within each profile layer.

Implementation

The character and portraits were created using ChatGPT and AI image tools, simulating a consistent persona as she grows across decades. The layout, voiceover, and accent colors were designed to keep the interaction simple, warm, and story-driven.

Try the demo

Follow Olivia's journey and explore how her roles shaped who she became.

About Me

Jayashree Ravi

Curious about more e-learning innovations? Connect with me on LinkedIn to share ideas, discuss implementation techniques, or explore instructional design challenges.

How To Embed An ElevenLabs Conversational AI Widget Into SL360 Using JS!
Hi Heroes,

It feels like something new and exciting is always around the corner in the world of generative AI technology, and this week ElevenLabs put themselves firmly in the driving seat of the agentic AI revolution with their new Conversational AI toolkit. If you haven't heard of it yet, check out this video, which explains it all: https://www.youtube.com/watch?v=v-EYzZCLF48&ab_channel=ElevenLabs

The interactive, animated widget that this toolkit provides is easy to embed anywhere, including directly within an Articulate Storyline 360 project slide! If you're interested in getting started, I've written a blog post that covers all the steps, including an Execute JavaScript snippet you can use to effortlessly load your agent into your activity: https://discoverelearninguk.com/how-to-set-up-elevenlabs-conversational-ai-widget-in-articulate-storyline-360/

I'm also currently experimenting with the API for the new Conversational toolkit to understand how I can implement it into my eLearning Magic Toolkit plugin for Storyline + WordPress, potentially opening the door to developing real-time, voice-activated automation all within a Storyline-built eLearning activity! Much more to come very soon. 🚀

---

My name's Chris Hodgson, an eLearning developer and software trainer based in the UK. I enjoy creating fun, unique, and engaging online experiences using Articulate software! Connect with me on LinkedIn: https://www.linkedin.com/in/chrishodgson44/

Articulate AI.....When?
So I have been reviewing numerous posts about the Articulate AI launch. Now, unless I missed something, I just cannot seem to find any definite launch date information. September seems to be the month now being talked about. Do we have any official word on this, any actual date? We have a couple of new projects we are keen to trial with Articulate AI.

An AI-Powered Knowledge Check in Storyline
I've been wrestling with this challenge for a while: How do we get learners to reflect without relying on quiz after quiz? How can we use open-ended questions to encourage deeper thought?

I've long considered AI for this, but there were hurdles:

- How do you integrate it into an Articulate Storyline course without paying for tokens or setting up contracts?
- How do you do it without leaking credentials in the course itself?
- Can it be done without having to modify code after exporting the course?

I learned recently that Hugging Face Transformers provide a solution. You can now download an AI model to the learner's machine and run it locally in their browser. I've managed to get this running reliably in Storyline, and you don't have to modify the code after export!

In the final slide of the demo, your goal is to recall as much as possible from the podcast/summary. The AI then checks your response and gives you a percentage score based on what you remembered.

Live demo and tutorial here: https://insertknowledge.com/building-an-ai-powered-knowledge-check-in-storyline/

If you want to learn how, I recommend starting with the sentiment analysis because it's easier to get started with. I've also provided a file to download in that tutorial if you want to reverse engineer it.

AI Assistant: Writing and Editing Inline Content
Generating content using the power of AI takes just seconds. However, without proper guidance, AI-generated content can be generic or out of step with your writing style and brand voice. Luckily, AI Assistant can be your new writing partner, helping you create and perfect content quickly—while you stay in control of the finished product.

Available in Rise 360 and Storyline 360, AI Assistant's write and edit inline feature allows you to generate or refine content right within your favorite authoring app. In Rise 360, click the Quick Insert icon in a text block with no content and choose the Write with AI option (sparkle icon), or select existing text to bring up a floating toolbar and choose the Edit with AI option. AI Assistant in Rise 360 is context-aware, referencing the lesson material surrounding the current block, the target audience defined during course outlining, and source documents to generate relevant and cohesive content.

For Storyline 360, access write and edit inline from the AI Assistant menu within the ribbon, or click the sparkle icon from the floating toolbar when you select existing text. AI Assistant in Storyline 360 only draws references from general knowledge based on its training data.

Ready to learn how to enhance your writing and editing with AI Assistant? Keep reading for tips to help you dive in.

- Improve Your Writing
- Beat Writer's Block
- Get Prompt Engineering Help
- Help Learners Understand Complex Topics

Improve Your Writing

AI Assistant's write and edit inline feature makes it easy to simplify content for accessibility, apply formatting, highlight key terms, add relevant emojis, and organize content into a list or table. You can also ask AI Assistant to find and address grammar, spelling, and punctuation mistakes—or take advantage of the many other options available in the dropdown menu shown below.

Pro tip: Not sure where to start but want to give your writing a quick boost? Click Improve writing and let AI Assistant upgrade your content.
Beat Writer's Block

AI Assistant's write and edit inline feature not only helps you refine existing content—it can also generate new content from scratch! It's perfect for those times when you just feel stuck. To get started, click the Quick Insert icon in a text block with no content. Then, choose the Write with AI option to bring up the dropdown menu. Select a prebuilt prompt, type in your desired topic, and press Enter. Continue to enhance the content using custom prompts until everything is perfect.

Get Prompt Engineering Help

Having difficulty figuring out an AI Assistant prompt to generate just the right image for your course? The write and edit inline feature can help you compose a detailed, effective prompt for image generation. Select your content and click the sparkle icon in the floating toolbar. Enter a custom prompt or select the Create an AI image prompt option from the dropdown menu. AI Assistant will generate a prompt based on the selected content. Once AI Assistant has generated the prompt, click the Copy option to save it to the clipboard.

Help Learners Understand Complex Topics

Using AI Assistant's write and edit inline feature is like having a writing expert at your side, ready to assist you throughout the course creation process. For example, you can prompt AI Assistant to analyze and explain your content or ask for key takeaways that summarize your main points effectively. This helps ensure that learners will also be able to grasp your essential messages.

Need to make your content more relatable and engaging? Ask AI Assistant to generate analogies that help learners draw connections between complex ideas and familiar concepts. Or ask for a well-structured scenario—AI Assistant can generate one based on your selected content in seconds. By working through a relatable situation, learners grasp the nuances of the topic and can visualize how these concepts apply to real-world contexts.
Note that AI Assistant's write and edit inline feature doesn't support media or HTML/CSS styling of inline content. And while Rise 360 supports emojis, tables, and lists within inline content, Storyline 360 only supports lists.

Video Tutorials

Want to learn more before getting started? Check out our video tutorials for additional guidance on using AI Assistant to write and edit inline content.

- Write and edit inline content in Rise 360
- Write and edit inline content in Storyline 360

Articulate 360 Training also has additional video tutorials on using other AI Assistant features.

- Use AI Assistant features in Rise 360
- Use AI Assistant features in Storyline 360

You must be logged in to your Articulate 360 account to watch the videos. Don't have an account yet? Sign up for a free trial now!

Interrogating the Future: An AI Confession
"The suspect knew too much about AI. Or maybe… she just knew how to answer the right questions."

Check out the recorded podcast here: Interrogating the future

How It All Began

It started as a simple reflection: ten questions about how AI is shaping my design work. But instead of writing a straight blog post, I found myself drawn to something more atmospheric. Something that felt like the process itself: shadowy, uncertain, full of creative tension. So I turned the reflection into a crime-show-style interrogation, complete with tape recorder hums, flickering lights, and a narrator whose voice demanded answers.

The irony? Every part of the production was built with AI. The words, the sound, the visuals, even the interrogation room itself were all digitally generated and then manually composed by me.

Built by AI, Crafted by Hand

I started by feeding the ten questions into ChatGPT, but instead of plain responses, I asked for a script. Together, we created a dialogue between a suspicious interrogator and me — a learning designer "accused" of collaborating with Artificial Intelligence. Then came the layers:

- Voice: generated using AI text-to-speech, giving each character a distinct tone and rhythm.
- Sound effects: sourced and blended through AI-assisted sound libraries; tape clicks, fluorescent hums.
- Images: created with AI image generation and enhanced in Photoshop's Generative Expand to build the noir interrogation room.
- Editing: every frame and cue assembled manually, timed to each pause, each flicker, each breath.

It wasn't just automation; it was orchestration.

Why Noir?

Noir has always been about truth hiding in plain sight. It's smoky, suspicious, human. And that's exactly how AI feels right now: part mystery, part revelation. The interrogation format gave me a way to ask the big questions:

- Is AI saving us time or stealing our craft?
- Can it really understand empathy, context, and culture?
- Or is it just pretending well enough to fool us — and our learners?
The Real Interrogation

Behind the theatrics, the project became a metaphor for the design process itself. Every day, learning designers interrogate ideas: "What's the story here?" "What does the learner need?" "Is this real, or just noise?" AI doesn't replace that questioning; it amplifies it. It's like having an endless brainstorm partner who never sleeps, never stops suggesting, and occasionally hands you brilliance on a platter.

The Craft of Collaboration

What fascinated me most was the balance. AI built the assets, but I gave them shape. It's a partnership that works best when humans stay in control of tone, meaning, and emotional truth.

"AI gave me the pieces. But I had to make them make sense."

That's the new creative muscle: knowing when to hand over, when to edit, and when to override.

Lessons from the Interrogation Room

By the end, I realised the project wasn't about AI at all; it was about agency. The ability to stay curious, playful, and skeptical, even when technology feels all-knowing. If AI has a role in the future of learning design, it's not to automate creativity; it's to augment it. To make space for designers to ask better questions, faster. To amplify storytelling, not silence it.

Final Word

So yes, I built my own interrogation. I wrote the script with AI. I voiced it with AI. I scored, illustrated, and expanded it with AI. And then I did what no algorithm could: I stitched it all together with intuition, timing, and story sense. Because creativity isn't about the tools you use. It's about what you do with them.

Fun Animated Timer for Gamification Projects
Hi Articulate heroes,

I wanted to highlight one very fast but fun-looking way to create timers for interactive projects. I learned about it from "Gamification Series; 05: Creating Tension with Timers." You can check out that amazing Gamification webinar series, which covers a few different ways to add timers to projects. I used this approach in my recent project "Cooking Frienzy," a Jeopardy-style cooking game. (By the way, you can check out the full game: Cooking Frienzy.)

So, here are the steps:

1) Create or find a "timer" picture. It could be any image with a transparent background that works for your theme (in my case, it's a "Pomodoro" timer, made with AI help and saved as a .png).
2) Insert the image onto your slide.
3) Go to the Animations tab.
4) Choose the Exit Animation "Wipe," then go to Effect Options and select "From Right." Set the animation duration to whatever time you need (10 seconds, 30 seconds, 1 minute, etc.).
5) Set triggers for what should happen after the timer is done (animation completes): e.g., jump to the next slide, show a "result-fail" layer, etc.
6) Preview and adjust if needed 🤞
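If you also want the remaining time displayed as a number, the animation approach above can be paired with a small script. Here's a hedged sketch of a JavaScript countdown that could run from a Storyline Execute JavaScript trigger. The `startCountdown` helper is plain, framework-agnostic JavaScript; `GetPlayer()` and `SetVar` are Storyline's standard JavaScript API, but the variable names `timeLeft` and `timerDone` are assumptions you'd create yourself in your project.

```javascript
// Illustrative sketch: a one-second countdown loop as a companion to the
// wipe-animation timer described in the steps above.
function startCountdown(seconds, onTick, onDone) {
  let remaining = seconds;
  onTick(remaining); // report the starting value immediately
  const id = setInterval(() => {
    remaining -= 1;
    onTick(remaining);
    if (remaining <= 0) {
      clearInterval(id); // stop ticking at zero
      onDone();
    }
  }, 1000);
  return id; // caller can clearInterval(id) to cancel early
}

// Inside a Storyline Execute JavaScript trigger, the wiring might look like
// this (assumes project variables named "timeLeft" and "timerDone"):
//
// const player = GetPlayer();
// startCountdown(30,
//   (t) => player.SetVar("timeLeft", t),    // drive an on-slide text variable
//   () => player.SetVar("timerDone", true)  // flag for a "time's up" trigger
// );
```

A "variable changes" trigger watching `timerDone` would then handle the time's-up behavior, just like the animation-completes trigger in step 5.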