Shoot and Destroy blocks Minigame
Interactive minigame shooting cannon balls. The cannon aims towards the cursor and shoots when you click on a block, and the blocks disappear as they are shot. Download the project at the bottom of this post.

How to shoot the Cannon Balls
This works using JavaScript and Motion Paths. Let's explore how it's done! I created 7 blocks and 7 corresponding Motion Paths on a picture of a cannon ball. You can change the starting point of the animation independently of where the cannon ball is located! The actual picture of the cannon ball sits off-screen. It also has a "reset" motion path; this is only needed if you add more complex interactivity that we are not going to explore in this demo.

Create this trigger to make the cannon ball disappear and reset after hitting a block. Create these types of triggers to make the blocks disappear when they are hit. Lastly, create this trigger to shoot the cannon ball.

Make the Cannon follow the Cursor
This is done using simple JavaScript. Create this trigger and add this code:

const rect1 = object('5i3YzB8KKND');
update(() => {
  const dx = rect1.x + rect1.width / 2 - pointerX();
  const dy = rect1.y + rect1.height / 2 - pointerY();
  const angle = Math.atan2(dy, dx);
  rect1.rotation = angle / Math.PI * 180;
});

This code can also be found in Articulate's own write-up of the built-in JavaScript support in Articulate Storyline. :) You need to paste the ID of your specific object into the top of the code. You can find it by right-clicking your object on the Storyline canvas. (If your cannon artwork ends up pointing the wrong way, see the small variation sketched at the end of this post.)

That's it, you're done! Now you can add layers, triggers that make things happen, sounds, visual effects, text, more blocks(!) and so on. :)

Download the project file
Explore the project by downloading the project file right here. It's created by www.mindsparkelearning.com and you can freely use it as is. :) All assets have been created using simple AI prompts. :)
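If the barrel ends up pointing away from the cursor rather than towards it, swapping the subtraction order turns the calculated angle by 180 degrees. This is only a hedged sketch using the same built-in object(), update(), pointerX() and pointerY() helpers shown above; 'YOUR_OBJECT_ID' is a placeholder for your own object's ID.

// Variation on the snippet above: swap the subtraction order so the
// rotation faces towards the pointer instead of away from it.
const cannon = object('YOUR_OBJECT_ID'); // placeholder - paste your own object ID here
update(() => {
  const dx = pointerX() - (cannon.x + cannon.width / 2);
  const dy = pointerY() - (cannon.y + cannon.height / 2);
  cannon.rotation = Math.atan2(dy, dx) / Math.PI * 180;
});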
Video Game Top-Down with Arrow Key controls

Introduction and features
If you have ever wanted to create a world that your users can explore, but haven't known how to do it, then this project can help you get out there and get after it. View the introduction in video format here: Video on LinkedIn

A full course using multiple characters, multiple boards and side-scrolling animations when moving from board to board is already being enjoyed by thousands in a professional setting at one of the biggest companies in the world. With this type of course, you can have the user walk around a world of any size, gather items, solve tasks and anything else you can think of. Review the project here: Review360

You control the avatar with the arrow keys on the keyboard, and she can only walk on the paths set by you. That means she can't walk through walls! You can warp to task slides or to another board slide using the Intersect trigger. Download the project at the bottom of this post.

How the character moves
The character is a PNG picture that is exactly 200x200 in size. The trick to movement is to also make the Motion Paths 200px in length and set them to Relative Start Point. I have chosen an animation time of 0.25 seconds. There are 5 states: the Normal state is the character in the idle position, and the other 4 states are self-explanatory. Lastly, two triggers are created (NUM 2 appears when you hit Arrow Down on your keyboard). Your character now moves around the canvas.

How the character was created
The character was created with AI using the following prompt:

I'm creating a game and need a picture of a character. The game has an isometric view and I need the following 4 poses in the same picture with transparent background: 1 Front view where she has an idle pose and standing still. 2 Front view where she is walking towards the camera. 3 Back view where she is walking away from the camera. 4 Left view where she is walking with arms swinging very little. Comically small body. Very large head. Female. 3D style. High resolution. Office clothes. Short hair. Glasses. Transparent background. Disney style.

That gave me the character in the poses I asked for. I use this large character for all slides except the board. For the board, I took the picture into Photoshop and shrunk the body into something even smaller.

How the board was made
The board was made using PowerPoint and AI. I drew the outline of the board in PowerPoint using square shapes - I guess you could do this in Storyline as well! Then I found random pictures of inventory on Google and copy-pasted them into various positions. Then I uploaded the picture to Copilot (any image-generation AI will work) and used the following prompt:

Can you create an exact replica of this image, but make it beautiful 3d top-down style. Make the colors pleasant and give it a disney vibe so that the scene feels warm and welcoming. Keep the layout the same.

How to prevent the character from walking through walls
This is the part that requires the most clicking. First you create a grid of 200x200px square shapes. They only need to cover the areas the character will be walking on. They need to be touching each other, and Shape Outline must be turned off - or it won't work. You then type a unique identifier into each square shape; I just used the numbers 1-27. I also renamed all the shapes "Square 1", "Square 2", "Square 3" and so on. Next you create a new Number variable. The default value should be the square your character will be starting on - in my case, 1.
You then create a set of triggers that change the location variable when each square intersects with the character. You can now track exactly where on the board your character is located - this is the key to controlling where it cannot move. Lastly, you create the following trigger for all four arrow keys. The numbers will be different for you, but basically you look at Square 1 and visualise whether the character should be able to move downwards from that location. If it shouldn't, you don't add the number to the list; if it should, you add it. (A JavaScript sketch of this lookup idea appears at the end of this post.)

Download the project file
Explore all the other tricks employed to create the soft shadows, the smooth transitions and more by downloading the file right here. :) This project was created by http://www.mindsparkelearning.com - but feel free to modify it and make it your own! :) I hope someone out there can find this project useful.
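As a footnote on the wall logic above: if you prefer to express the movement rules in JavaScript rather than in long trigger condition lists, the lookup could be modelled roughly like this. This is a minimal sketch, not part of the downloadable project; the variable name "Location", the direction names and the square numbers are placeholders you would swap for your own.

const player = GetPlayer();

// Which directions are allowed from each square. The entries below are placeholders;
// fill in one entry per square on your own board.
const allowedMoves = {
  1: ["down", "right"],
  2: ["left", "right"],
  3: ["up", "left"]
  // ...and so on up to Square 27
};

// Returns true if the character may move in the given direction from its current square.
function canMove(direction) {
  const square = player.GetVar("Location"); // the Number variable tracking the current square
  const moves = allowedMoves[square] || [];
  return moves.indexOf(direction) !== -1;
}

// Example: only run the "move down" motion path when canMove("down") is true.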
🔍 Interactive Magnifier Tool for Image Inspection

Hi everyone, I wanted to share a new experiment I've been playing with recently: a custom magnifying-glass inspection tool built with HTML, CSS, and JavaScript, then embedded into Rise 360 using a Storyline block. This idea came from reviewing an AI-generated laboratory image that looked perfectly fine at a distance, but the closer I examined it, the more inconsistencies and small "AI giveaways" I found. That sparked the idea for a scenario-based inspection activity, where learners can zoom in to look for issues, hazards, or clues.

🔧 What it does
The interaction uses a hexagonal lens that lets learners:
- Toggle Inspect Mode on/off
- Move a magnifying glass across an image
- Zoom in with the mouse wheel
- Zoom using keyboard shortcuts (+ and −)
- Navigate smoothly across very large, detailed visuals

It works brilliantly for:
- Spot-the-issue / observational tasks
- Quality assurance or audit simulations
- Safety checks
- Equipment-familiarisation exercises
- Any situation where learners must analyse detail

🎨 Customisable
If you'd like to adapt it, you can easily modify:
- The shape of the magnifier
- The image
- The zoom strength
- The toggle button
- The colours and frame styling

I'll include instructions in the shared example so you can download the code, replace the image, or restyle it however you like.

💡 Why I'm sharing it
Like many of you, I love finding ways to push what Rise + Storyline can do together. This tool combines accessibility, usability, and custom code in a way that still fits neatly inside the Articulate ecosystem - no outside hosting needed. Would love to hear any thoughts, suggestions, or creative variations you might come up with!

🎁 If you'd like…
I can also generate:
- A compressed ZIP of the interaction
- A Storyline .story file with everything preconfigured
- A variation with hotspots that react when the lens passes over them
- A version that reveals information only when inspecting certain regions
- An accessibility-first version with keyboard-draggable magnifier

Have a play: REVIEW360 | Shared folder for ZIPS
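For anyone curious how this style of lens works under the hood, here is a stripped-down sketch of the general technique: a round lens rather than the hexagonal one in the demo, with no Inspect Mode toggle or keyboard shortcuts. It is not the shared project's code; the 'scene' element id, the 150px lens size and the zoom limits are all assumptions for illustration.

// Not the shared project's code - a stripped-down sketch of the lens technique.
// Assumes an <img id="scene"> element on the page.
const img = document.getElementById('scene');
const lens = document.createElement('div');
const LENS_SIZE = 150;
let zoom = 2;

Object.assign(lens.style, {
  position: 'absolute',
  width: LENS_SIZE + 'px',
  height: LENS_SIZE + 'px',
  borderRadius: '50%',
  border: '3px solid #333',
  pointerEvents: 'none',              // let mouse events fall through to the image
  backgroundImage: 'url(' + img.src + ')',
  backgroundRepeat: 'no-repeat',
  display: 'none'
});
document.body.appendChild(lens);

img.addEventListener('mousemove', (e) => {
  const rect = img.getBoundingClientRect();
  const x = e.clientX - rect.left;    // cursor position within the image
  const y = e.clientY - rect.top;
  lens.style.left = e.pageX - LENS_SIZE / 2 + 'px';
  lens.style.top = e.pageY - LENS_SIZE / 2 + 'px';
  // Scale the background and shift it so the point under the cursor
  // appears magnified at the centre of the lens.
  lens.style.backgroundSize = rect.width * zoom + 'px ' + rect.height * zoom + 'px';
  lens.style.backgroundPosition =
    -(x * zoom - LENS_SIZE / 2) + 'px ' + -(y * zoom - LENS_SIZE / 2) + 'px';
  lens.style.display = 'block';
});

img.addEventListener('mouseleave', () => { lens.style.display = 'none'; });

// Mouse wheel adjusts the zoom strength, clamped between 1x and 8x.
img.addEventListener('wheel', (e) => {
  e.preventDefault();
  zoom = Math.min(8, Math.max(1, zoom + (e.deltaY < 0 ? 0.5 : -0.5)));
});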
Santa's Naughty or Nice List

Happy Holidays, everyone! I recently built a sorting exercise for a client which used JavaScript to split the learner's text entry answer into individual sentences and display these as draggable Post-It notes. The learner was then asked to sort the Post-It notes into 'good' and 'bad' ideas. Which got me thinking... could Santa use a similar method to decide who's been naughty or nice?

In this Christmas-themed version, I've simplified the JavaScript to recognise only individual words (by space or line break) and generate up to 10 Post-It notes for the sorting exercise.

// Get the learner's response from the Storyline text variable
var player = GetPlayer();
var learnerResponse = player.GetVar("Christmas_List");

// Split the text into individual words
// This handles multiple spaces, line breaks, and other whitespace
var words = learnerResponse.split(/\s+/);

// Remove any empty strings from the array
words = words.filter(function(word) {
  return word.trim().length > 0;
});

// Assign each word to numbered variables (Person_1, Person_2, etc.)
// Limited to 20 slots
var maxSlots = 20;
for (var i = 0; i < words.length && i < maxSlots; i++) {
  var varName = "Person_" + (i + 1);
  var trimmedWord = words[i].trim();
  player.SetVar(varName, trimmedWord);
}

// Optional: Store the total number of words processed
player.SetVar("TotalWords", Math.min(words.length, maxSlots));

Each of the 10 Post-It notes only becomes visible if a name has been assigned to it. Depending on the split between 'naughty' and 'nice', you'll see a different video message from Santa. I created the Santa videos with Powtoon's AI Text to Video tool.

Warning: it's Drag and Drop
As fun as this is, Drag and Drop isn't fully accessible, and I'm still tinkering with the master file to see if I can add keyboard fallback controls and preserve the counting system. Watch this space! Play here!
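For anyone who wants the sentence-based behaviour mentioned at the start of this post rather than single words, a rough sketch of that split might look like the following. It is not the client version of the script, just an assumed variation that reuses the same Christmas_List and Person_N variable names.

// A rough sketch of a sentence-based split (not the original client script).
// Reuses the same Storyline variables as the word-based code above.
var player = GetPlayer();
var learnerResponse = player.GetVar("Christmas_List");

// Grab runs of text ending in ., ! or ? as individual "sentences".
var sentences = learnerResponse.match(/[^.!?]+[.!?]*/g) || [];

var maxSlots = 10;
var count = 0;
for (var i = 0; i < sentences.length && count < maxSlots; i++) {
  var sentence = sentences[i].trim();
  if (sentence.length > 0) {
    count++;
    player.SetVar("Person_" + count, sentence);
  }
}

// Store how many notes were filled (reusing the TotalWords variable from above).
player.SetVar("TotalWords", count);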
Interactive Team Introduction Training Module
What I Built
I designed an interactive onboarding experience in Articulate Storyline 360 for new employees joining the training department. Instead of a static team flow chart, the final product is a set of interactive team introduction slides. Each team member is represented with:
📸 A photo
🏷️ Their name and title displayed at the bottom
📝 A detailed introduction revealed directly on their photo

Navigation is intuitive, with Next and Previous buttons (enhanced with tooltips), and an Information button at the end to credit the Storyline 360 photo library.

Behind-the-Scenes Process
- I started with a simple flow chart to show team structure.
- I realized it felt too static, so I reimagined it as an interactive experience.
- I built individual slides for each team member, layering text over images for clarity.
- I added navigation controls to make the module feel smooth and user-friendly.
- I incorporated credits to acknowledge the resources used.

Purposeful Design
The goal was to help new employees quickly connect with their team. Instead of just seeing names on a chart, they now get a personalized introduction to each colleague. This design solves the need for:
- Better engagement during onboarding
- A more human, approachable way to meet the team
- A reusable template that can be updated as the team grows

Explore & Review
You can explore the interactive module yourself here: 👉 Review Link – Interactive Team Intro Example
For those who want to learn from or adapt this build, the source file is attached. Cheers! JANI
Storyline Instant ToolTip

Hi everyone! I'm sharing a script to add tooltips in Storyline. Just one reference object, a quick copy-paste, and it's ready. You could use a native rollover state, but this gives you full control over the animation. Position, colors, shadow: all configurable in 10 seconds. No JavaScript skills needed; the script is designed to be easy to use and maintain.
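The post above shares its own script, so purely as an illustration of the general idea, here is a minimal DOM tooltip sketch: show a styled box next to any element on hover. The data-tooltip attribute and all the styling values are assumptions, not the shared script.

// Not the shared script - a minimal sketch of a hover tooltip.
// Any element with a data-tooltip attribute gets one.
const tip = document.createElement('div');
Object.assign(tip.style, {
  position: 'absolute',
  padding: '6px 10px',
  background: '#222',
  color: '#fff',
  borderRadius: '4px',
  boxShadow: '0 2px 6px rgba(0,0,0,.3)',
  fontSize: '14px',
  pointerEvents: 'none',
  display: 'none',
  zIndex: 9999
});
document.body.appendChild(tip);

document.querySelectorAll('[data-tooltip]').forEach((el) => {
  el.addEventListener('mouseenter', () => {
    tip.textContent = el.dataset.tooltip;   // tooltip text comes from the attribute
    tip.style.display = 'block';
  });
  el.addEventListener('mousemove', (e) => {
    tip.style.left = e.pageX + 12 + 'px';   // follow the cursor with a small offset
    tip.style.top = e.pageY + 12 + 'px';
  });
  el.addEventListener('mouseleave', () => { tip.style.display = 'none'; });
});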
Add Audio Note-Taking to Your Storyline Projects!

How it works
It's straightforward: when the learner clicks "Record", a browser popup appears and they must allow microphone access for the script to start. Once they are done and hit Stop, the audio file is immediately downloaded to their device.

💡 Tech Note: The script generates a .WAV file. Why? Because it's supported natively by browsers. Exporting to MP3 would require injecting complex external libraries, which makes integration much harder. Keeping it simple and robust is key here!

⚠️ Important Constraint: This script works perfectly in Preview mode or once hosted on your LMS, web server or local server. However, it will NOT work in Articulate Review 360: the platform restricts access to the microphone and camera for security reasons. So make sure to test it in a real environment!
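The post does not include the script itself, so purely as an illustration of the general record-and-download flow, here is a minimal sketch using the browser's MediaRecorder API. Note that MediaRecorder produces WebM/Ogg by default; producing a true .WAV file as the shared script does needs extra encoding work with the Web Audio API, which this sketch leaves out.

// Not the shared script - a simplified sketch of the record-and-download flow only.
let recorder = null;
let chunks = [];

async function startRecording() {
  // This call triggers the browser's microphone-permission popup mentioned above.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  chunks = [];
  recorder = new MediaRecorder(stream);
  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.onstop = () => {
    const blob = new Blob(chunks, { type: recorder.mimeType });
    const link = document.createElement('a');
    link.href = URL.createObjectURL(blob);
    link.download = 'audio-note.webm';   // the shared script saves .wav instead
    link.click();
    setTimeout(() => URL.revokeObjectURL(link.href), 1000);
    stream.getTracks().forEach((track) => track.stop()); // release the microphone
  };
  recorder.start();
}

function stopRecording() {
  if (recorder && recorder.state === 'recording') recorder.stop();
}

// Wire startRecording/stopRecording to your Record and Stop buttons,
// for example via Storyline "Execute JavaScript" triggers.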
💡 Confidence Self-Check: A Reflective Benchmark Tool UPDATED 151125 - See comments below!! 👇

Hi everyone, UPDATED 151125 - See comments below!! 👇

Here's a quick show-and-tell example I've been experimenting with: a Confidence Self-Check tool built in Storyline 360 and embedded into Rise 360 as a formative reflection block. The goal was to give learners a way to benchmark their confidence and awareness before and after a session, helping them see their own progress and prompting metacognitive reflection, without the need for LMS data capture.

I wanted something that:
✅ Supports metacognition - helping learners think about their own learning
🔄 Tracks progress with "before" and "after" self-checks
🧠 Encourages reflection rather than testing knowledge
💬 Uses local storage only (no data collection) to keep it private and learner-centred

💻 How it was created
This build was produced through an iterative, Generative-AI-assisted workflow, where I coached an AI (ChatGPT – GPT-5) step by step through design reasoning, JavaScript development, accessibility checks, and instructional alignment. The focus was on human-assured prompting: using AI to accelerate build logic while maintaining learning design intent, tone, and pedagogy. The project was inspired by JoeDey's "Perpetual Notepad" (huge kudos for the original concept!), and extended to include weighted confidence scoring, dual checkpoints, and adaptive feedback messages.

⚙️ Known limitation
Because this tool is designed to be session-specific, each new deployment requires:
- Updating the SESSION_LABEL and STORAGE_PREFIX variables in the JavaScript to give that session its own ID.
- Editing the question text to match the focus of that session.

These edit points are clearly marked in the script with:
>>> EDIT SESSION METADATA HERE <<<
and
>>> EDIT QUESTIONS FOR THIS SESSION HERE <<<

It's a simple one-minute change, but worth noting if you plan to scale this across multiple modules or courses.

You can explore the working example here: 👉 Rise Review link
A downloadable .story file is included inside the review for anyone who wants to look under the hood, edit the JavaScript, or adapt the design for their own learners.

💬 Open for feedback
I'd love to hear from other e-learning designers, especially anyone experimenting with AI-supported authoring or reflective learner tools. How might you extend or refine this concept? I'd love your thoughts or suggestions, particularly around:
- How you'd extend this for different learner profiles
- Ideas for alternative feedback messages or visual treatments
- Whether you've built similar "confidence meter" interactions in your own work

Feel free to reuse, remix, or expand the concept. Always happy to connect and collaborate with other learning designers!
🔗 Portfolio: forgedframeworks.co.uk/
📧 Contact: dan.boyland@forgeframeworks.co.uk

Thanks in advance for any feedback, and again, credit to Joe Dey and the Articulate community for sharing the foundation idea that made this possible.
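As a footnote on the local-storage mechanics described above, here is a minimal sketch of how before/after ratings could be kept on the learner's device. It mirrors the SESSION_LABEL and STORAGE_PREFIX edit points mentioned in the post, but the key names, the 1-5 rating scale and the averaging are assumptions for illustration, not the project's actual script.

// A minimal sketch of the local-storage idea only, not the project's actual script.
const SESSION_LABEL = "Session 1: Example Topic";   // >>> EDIT SESSION METADATA HERE <<<
const STORAGE_PREFIX = "confidence_session1_";

// Save a checkpoint ("before" or "after") as an array of 1-5 confidence ratings.
function saveCheck(stage, ratings) {
  localStorage.setItem(STORAGE_PREFIX + stage, JSON.stringify(ratings));
}

// Average the stored ratings for a checkpoint, or return null if none exist yet.
function averageFor(stage) {
  const stored = JSON.parse(localStorage.getItem(STORAGE_PREFIX + stage) || "[]");
  if (stored.length === 0) return null;
  return stored.reduce((sum, r) => sum + r, 0) / stored.length;
}

// Compare the two checkpoints to drive an adaptive feedback message.
saveCheck("before", [2, 3, 2]);
saveCheck("after", [4, 4, 3]);
const before = averageFor("before");
const after = averageFor("after");
if (before !== null && after !== null) {
  console.log(SESSION_LABEL + ": confidence moved from " + before.toFixed(1) + " to " + after.toFixed(1));
}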
Updated "Reveal" codes

I've been experimenting with the original HTML code blocks included in Articulate 360's built-in examples and wanted to share how far you can extend that base structure using GenAI to iterate and refine interactions. Starting with the default image-reveal index provided by Articulate, I used GenAI to progressively develop three new versions. I supplied my own images, created meaningful alternative text for screen readers, and introduced additional UX and accessibility improvements. Every version is fully tailorable if you want to adapt the formatting, colours, spacing or behaviour.

The three examples are:

1. Enhanced Image Reveal Grid - uses the original Articulate structure and adds a hover zoom, a click-to-zoom state, and high-contrast purple letter tiles for accessibility.
2. Fanned "Deck of Cards" Flip Interaction - a dynamic fanned layout, more like a real card hand. Cards lift and reveal their letter on hover, flip on click, and reset if clicked again. Includes chevron navigation for easier cycling.
3. Plain Flip Grid with Navigation - a clean, accessible flip-card grid with navigation chevrons. Mirrors the deck behaviour but with a simplified layout.

All three examples are linked below, along with the downloadable files. If you have suggestions, improvements or alternative approaches, I'd really love the feedback. And if you'd like to use or remix any part of this, feel free - I'd love to see what you create with it.

Review360 | Zip files GDrive location: HERE
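As a tiny illustration of the flip behaviour used across these examples (none of the fanned layout, chevrons or reveal logic), a click-and-keyboard toggle could look roughly like this. The .card and .flipped class names and the CSS transform are assumptions, not the shared code.

// A tiny sketch of the flip behaviour only - not the shared example code.
document.querySelectorAll('.card').forEach((card) => {
  const flip = () => card.classList.toggle('flipped'); // clicking again resets the card

  card.addEventListener('click', flip);

  // Basic keyboard support so the cards are not mouse-only.
  card.setAttribute('tabindex', '0');
  card.addEventListener('keydown', (e) => {
    if (e.key === 'Enter' || e.key === ' ') {
      e.preventDefault();
      flip();
    }
  });
});

// The .flipped class is assumed to apply something like:
//   .card { transition: transform 0.4s; }
//   .card.flipped { transform: rotateY(180deg); }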