Forum Discussion
Meet Your New Co-Presenter: What I Learned Building with AI Avatars in Rise 360
If you've been following along on E-Learning Heroes lately, you've probably noticed AI Avatars starting to pop up in people's Rise 360 projects. I decided to take it for a real spin — not just a quick test, but an actual microlearning build — and I want to share what I found. The good, the genuinely useful, and the "wish I'd known that before I hit publish."
The result is a five-slide microlearning called Meet Your New Co-Presenter, built entirely in Rise 360 with AI Avatars as the hook. Here's what I learned along the way.
What AI Avatars actually does
In case you haven't tried it yet: AI Avatars is a native Rise 360 feature that generates a lifelike on-screen presenter from a written script. You type what you want the avatar to say, choose a presenter style, and Rise generates a speaking video — no camera, no studio, no scheduling a subject matter expert who keeps canceling on you.
It lives inside Rise as a Media Block, which means it drops cleanly into your lesson flow just like any other media element. The generated video includes Rise's native player controls automatically, so learners can pause, rewind, and replay without any extra setup on your end.
For certain use cases — quick scenario illustrations, course introductions, process overviews — it's a genuinely useful addition to the toolkit. And the speed of it is real. From script to generated avatar, you're talking minutes, not days.
What worked well
The hook moment is powerful. There's something genuinely compelling about opening a course with a speaking presenter that you built from a single paragraph of text. For the first slide of this microlearning, I used an AI Avatar to deliver the hook line — "What if your next training course already had a presenter?" — and then immediately showed learners the exact script that produced it. The meta moment landed exactly the way I hoped.
Script-first thinking is actually good instructional design practice. Because the avatar reads exactly what you write, you're forced to write the way people talk, not the way people write slide decks. Short sentences. Active voice. A conversational rhythm. Those are things we should be doing anyway — AI Avatars just enforces it.
Iteration is fast. Don't like how something sounds? Rewrite a sentence, regenerate, and you've got a new version in a couple of minutes. No reshoots. No re-recording. No waiting on anyone. That speed genuinely changes how you think about revision.
Avatar selection is a real design decision — and a good one. The range of avatar styles available gives you enough variety to be intentional about representation. Choosing a presenter who reflects your learner audience is a small decision with real impact on engagement and trust.
You can build a custom avatar — and it's easier than you think
Here's the part that genuinely surprised me, and that I think more IDs need to know about: you're not limited to the preset avatar options. You can create a custom AI Avatar using a plain-language prompt describing exactly who you want your presenter to be.
For this microlearning, I created a custom avatar from scratch using this prompt:
A professional woman in her 50s with long, straight auburn-brown hair and bangs. She has light skin, green-hazel eyes, and a warm, confident smile. She is wearing a black blazer over a black turtleneck. She is standing in a modern professional office environment with soft, neutral lighting. Her expression is approachable and knowledgeable — like a trusted subject matter expert presenting to colleagues. She faces the camera directly with a poised, engaging demeanor.
That's it. One paragraph. And what came back was a polished, realistic-looking presenter that felt like she belonged in the course — not a generic stock character, but someone with a specific look, a specific energy, and a specific professional context.
Think about what that means for your work. If you're building for a specific industry, you can describe a presenter who looks like they actually work in that field. If you have brand guidelines around representation, you can build a presenter who reflects them precisely. If your learner audience is specific — frontline healthcare workers, financial advisors, retail managers — you can describe a presenter who looks like a trusted colleague rather than a generic talking head.
A few tips for writing custom avatar prompts that get good results:
Be specific about appearance. Hair color and style, eye color, approximate age, skin tone — the more detail you give, the more consistent and intentional the result. Vague prompts produce vague avatars.
Describe the setting, not just the person. The background matters more than you'd think. "Modern professional office with soft neutral lighting" reads completely differently than "bright open-plan workspace" or "clinical white background." The environment signals context to your learner before the avatar says a single word.
Name the energy, not just the look. Phrases like "approachable and knowledgeable," "warm but authoritative," or "confident and direct" genuinely influence how the avatar presents. Think of it like writing a casting note for an actor.
Iterate the prompt, not just the script. If the first result isn't quite right, adjust the prompt description and regenerate. Small tweaks — changing "business casual" to "blazer and turtleneck," for example — can shift the result meaningfully.
The custom avatar prompt is, in my view, one of the most underrated parts of this feature. The ability to describe your ideal presenter and actually get them — without a casting call, a shoot day, or a post-production budget — is genuinely remarkable. Use it intentionally.
What to know before you dive in
Here's where I want to be honest with you, because the E-Learning Heroes community deserves the real version, not the marketing version.
You cannot edit the generated video. This is the big one. Once Rise generates your avatar video, what you get is what you get. There's no trim tool. No way to cut a section. No way to splice takes together.
I ran into this firsthand: my avatar was still visibly moving, still talking, at the very end of the clip, even after the script had ended. It looked awkward, and there was nothing I could do about it. No way to cut those last few seconds. I actually left that version in the microlearning intentionally so you can see exactly what I mean. Go ahead and watch through to the end of the first slide and you'll catch it. The only fix was to rewrite the script, add a cleaner closing line, regenerate the whole thing, and hope the new version ended more gracefully.
If you're used to working with tools like Camtasia or even a basic video editor where you can trim a clip to the frame, this limitation will feel significant. Plan your scripts carefully on the front end — because you can't fix it on the back end.
You cannot export the avatar video out of Rise. The generated video lives inside Rise and that's where it stays. You can't download it, bring it into another tool for editing, or repurpose it elsewhere. If you were hoping to grab the file and use it in a Storyline project or a standalone MP4, that's not currently possible.
The Media Block is the only home for AI Avatars. This has layout implications. Because AI Avatars only lives in a Rise Media Block, you don't have the layout flexibility to, say, place the avatar side by side with text content in the same block. I originally designed this microlearning with a two-column layout — avatar on the left, script context on the right — but had to rethink it when I realized the Media Block doesn't support that kind of custom positioning. I ended up stacking the avatar above the Code Block content, which actually worked fine once I reframed it: the avatar speaks first, then learners look down and see the script that produced it.
Flexibility is limited overall. Beyond the editing and export limitations, you're working within whatever Rise generates. You can influence the output through your script, but you can't control pacing, emphasis, pause timing, or the avatar's gestural behavior in any granular way. What you see is what the AI decided to do with your text.
How I worked around the limitations
A few things that helped:
Write for endings. Since you can't trim, your last line matters a lot. Write a clean, definitive closing sentence that gives the avatar a natural stopping point. Something like "Let's take a look at how it works" worked better for me than an open-ended line that trailed off.
Keep scripts short and focused. The shorter the script, the easier it is to regenerate if something doesn't land right. I kept every avatar script in this microlearning to 20–30 seconds. Fast iteration is only an advantage if you're not regenerating a two-minute monologue every time.
Use the avatar strategically, not on every slide. I originally planned to include an AI Avatar on each of the five slides. After working through the layout constraints and the Media Block limitation, I scaled back to just the first slide. That was actually the right call — one strong hook moment is more impactful than a talking head on every screen, and it kept the rest of the microlearning moving at a good pace.
Lean into Code Blocks for layout control. Since the avatar lives in a Media Block and I needed more visual control over the surrounding content, I built each slide's content layout as a custom Code Block. This gave me full control over typography, spacing, animations, and interactivity — things Rise's native blocks don't always offer. If you're comfortable with HTML, CSS, and JavaScript, Code Blocks are where you can push Rise further than it's designed to go.
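To make that concrete, here's a minimal sketch of the kind of layout a Rise Code Block can carry. Everything in it is illustrative: the class names, the copy, and the fade-in behavior are my own example, not taken from the actual course. It shows the basic pattern, though: a styled content card plus a few lines of JavaScript that reveal it once it scrolls into view, so it appears after the avatar above it has had its moment.

```html
<!-- Illustrative Rise 360 Code Block content.
     Class names and copy are hypothetical examples. -->
<style>
  .script-card {
    max-width: 640px;
    margin: 2rem auto;
    padding: 1.5rem;
    font-family: Georgia, serif;
    background: #f7f5f2;
    border-left: 4px solid #2b2b2b;
    /* Start hidden; the JS below reveals it on scroll. */
    opacity: 0;
    transform: translateY(12px);
    transition: opacity 0.6s ease, transform 0.6s ease;
  }
  .script-card.visible {
    opacity: 1;
    transform: none;
  }
  .script-card h3 {
    margin-top: 0;
    letter-spacing: 0.05em;
  }
</style>

<div class="script-card" id="scriptCard">
  <h3>The script that built her</h3>
  <p>"What if your next training course already had a presenter?"</p>
</div>

<script>
  // Fade the card in once roughly 40% of it is visible,
  // then stop observing so the animation only runs once.
  var card = document.getElementById("scriptCard");
  var observer = new IntersectionObserver(function (entries) {
    if (entries[0].isIntersecting) {
      card.classList.add("visible");
      observer.disconnect();
    }
  }, { threshold: 0.4 });
  observer.observe(card);
</script>
```

None of this is possible in a native Rise text block, which is exactly why the Code Block became my layout workhorse for this project.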
The bottom line
AI Avatars is a genuinely useful feature — with real constraints. It's fast, it's accessible, and it removes a meaningful barrier for IDs who want a human presence in their courses but don't have the budget, equipment, or scheduling flexibility for recorded video.
But go in clear-eyed. You're trading editorial control for speed and convenience. If your workflow depends on being able to trim, cut, export, or precisely control the output, you'll run into walls. If you can write a tight script, embrace iteration, and design around the tool's constraints rather than against them — you'll find a lot to like.
I'd love to hear how others are using AI Avatars in their own projects — especially if you've found creative workarounds I haven't thought of yet. Drop your thoughts in the comments below.