When you use a platform like HeyGen, Synthesia, TikTok, Meta or Zoom to create your digital twin, who owns your likeness? What can that company do with your image or your voice? Consider the stats:
– Over 100 million people used Lensa AI for digital avatars in 2023
– MetaHuman Creator reports over 100,000 digital humans created
– 13% of organizations implementing IoT projects are using digital twins
In fact, the global digital twin market size is projected to reach USD 110.1 billion by 2028, growing at a compound annual growth rate (CAGR) of 61.3%.
In this new podcast episode, Dr. David Mitchell and I discuss the benefits and risks of creating your own avatar or digital twin. We also discuss the No Fakes Act and its potential impact on the digital twin market, as well as some of the celebrities who are embracing their own avatar creation.
Plus, we chat about some strategic uses of ChatGPT and the impact AI is having on PR, specifically crisis communications: how brands can begin protecting their content with watermarks and training their spokespeople to talk about AI.
You can listen to the audio here.
Or, check out our new podcast set and watch the video.
Transcript:
Jennifer Jones-Mitchell: Welcome to the AI Central Podcast, brought to you by Human-Driven AI. That’s right! The Human-Driven AI podcast is now called AI Central. We’ve got a new name, a new set, and a whole lot of new tech to talk about. I’m Jennifer Jones-Mitchell.
David: And I’m Dr. David Mitchell.
Jennifer: Let’s get into it. So, today, honey, what I want to talk about is avatars because it seems like right now every AI company, every tech company, is pushing for people to create their own avatar or their own digital twin. Synthesia, Meta, Zoom, TikTok—they’re all trying to get us to do this.
David: Yes!
Jennifer: And I don’t know if you remember, but I actually predicted this back in—I guess it was November or December of 2023—when we went downtown, and I presented at that conference, DigiMarcon.
David: DigiMarcon, yep.
Jennifer: I told that audience that very, very soon we will all have digital twins of ourselves. I called them avatars back then, but they’re digital twins. And I remember someone in the audience asking, “Why on earth would we want to do that?”
David: Right.
Jennifer: And I said, “Well, I only speak English. My avatar, my digital twin, could speak every language.” And then you have the advantage of content creation. Of course, it takes a lot of time to do the hair and makeup and get everything ready for video shoots and stuff, so to be able to have a digital twin to do a talking head video would be very helpful.
David: Yeah, it really would. Seriously, it’d save a lot of time. You wouldn’t have to set up all these lights, hang stuff on the walls, and set up the cameras.
Jennifer: Exactly. Much easier.
David: So much easier.
Jennifer: But on the flip side of that, I’ve been a little hesitant to do it, as you know.
David: Right.
Jennifer: For our audience’s sake—our listeners’ sake—I did start to create an avatar of myself on Synthesia. If you do create this, be prepared to do several takes. We both started creating one, right?
David: Yeah, I did two, actually.
Jennifer: And we ran into issues. Learn from us, people! So, first of all, I’m a naturally smiling person. I just smile—that’s who I am. When you record your avatar, you’re in front of a green screen, and you’re reading a script. I was smiling during it, so my first avatar looked insane.
David: (Laughs)
Jennifer: She was just smiling earnestly at you—just crazy. Then I overcorrected on the second one, and she looked really, really angry.
David: And then I had the height issue, right?
Jennifer: Yes.
David: So, they put the script at the top of the screen, and I was trying to read it. I wear bifocals, so I was kind of tilting my head like this to see the words. My avatar ended up doing this weird bobbing motion the whole time.
Jennifer: (Laughs)
David: I had to stack some books to get the height right and then focus on not squinting.
Jennifer: Yeah, I had to try to keep my eyes open and stay level. It’s tricky!
David: It’s harder than you think to get the perfect take.
Jennifer: It really is. Well, I haven’t released or created the final version of my avatar because it occurred to me that I’m going to create this digital twin with these companies—some of whom are owned by China, or Chinese-owned, I should say.
David: Right.
Jennifer: So, I took all the terms and conditions and fed them into ChatGPT because I wanted to know: am I handing over my likeness to these companies?
David: Good.
Jennifer: It’s interesting. I actually have some of the terms and conditions here. I will say that HeyGen and Synthesia originally had terms that said they would own the license, but they’ve since changed that. I fed all the terms and conditions into ChatGPT and asked, “Who owns it? What’s the risk here?”
David: What did you find?
Jennifer: Meta says—quote—“While you retain ownership of your avatar or digital twin, you are also granting Meta a non-exclusive, transferable, sublicensable, royalty-free, worldwide license to use it.”
David: Whoa!
Jennifer: I mean, the language alone makes your head spin. Basically, it says Meta can use your likeness.
David: That’s wild.
Jennifer: And TikTok? Their terms are similar. They essentially say you’re granting them a broad license to use, reproduce, and distribute your digital twin, including any likeness featured in your video. TikTok can use it across their entire platform and services.
David: Wow.
Jennifer: The others—HeyGen and Synthesia—now have an opt-in system. You can opt in to allow them to use your likeness or avatar for training purposes and things like that. But right now, Meta and TikTok are the ones I would stay away from.
David: That reminds me of the No Fakes Act in Congress.
Jennifer: Oh yeah, right!
David: I wrote a blog post or two about it for the Atlanta School of Commedia. Basically, the act says no one can use your likeness, sound, voice, or music without your permission.
Jennifer: But it hasn’t passed yet.
David: Right. That’s why these companies are locking people into contracts now—before the law passes.
Jennifer: Sneaky.
David: Very.
Jennifer: If you’re looking into creating a digital twin or an avatar, I get it—I want to do it myself. But make sure you look at the terms and conditions and pay attention to the permissions. Sometimes they opt you in automatically, and you have to opt out.
David: That’s so true.
Jennifer: And with Gen Z, I think avatars might actually help with some of their workplace challenges. We were talking the other day about all the struggles Gen Z is having in the workplace. I read a story where IBM is teaching classes to help them learn how to have conversations and make eye contact.
David: Wow, really?
Jennifer: Yeah, but it makes sense. This is a generation that grew up fully in a digital world.
David: And they were isolated during the pandemic, too. That’s a big thing—those are key developmental years.
Jennifer: Exactly. Our own kiddo is more comfortable in the Discord metaverse as an avatar. They don’t like making phone calls at all.
David: Not at all.
Jennifer: It’s funny because I remember when I was running a PR agency, and even before that when I was a group leader at some agencies, we were trained on managing Millennials. Back then, the issue was that Millennials didn’t like making phone calls.
David: That’s wild.
Jennifer: I don’t know if our listeners know, but my background is in PR. When I first started working in PR, we literally called it “smile and dial.” You would call up reporters, pitch stories, and smile while you did it—even though they couldn’t see you. I could not, for the life of me, get Millennials to pick up the phone and call a reporter. They just wouldn’t do it.
David: Wow.
Jennifer: Gen Z is even more reluctant. I wonder if this professional push to use avatars, especially on Zoom and in virtual meetings, is something Gen Z will embrace wholeheartedly.
David: It seems like they might.
Jennifer: I’m obviously painting broad strokes here because I know some Gen Zers who are excellent in meetings. Like Laura’s son Cameron—he doesn’t fit the mold. But I do think there’s a significant subset of this generation that struggles with workplace communication. Avatars might make it easier for them, particularly on Zoom.
David: Right. But, as with anything AI-related, I think people need to approach this carefully. That’s why I like your idea of putting the terms and conditions into a chatbot or ChatGPT and asking specific questions.
Jennifer: Exactly. You can have it summarize the terms or ask, “Does this say they own my likeness?” or “What rights am I granting?” It’s powerful, but you have to be specific. If you just ask for a summary, sometimes it’s as long as the original terms. I prefer to ask direct questions like, “Who will own my likeness?”
David: That’s a good point. Otherwise, you end up with a summary that’s just as overwhelming as the terms and conditions themselves.
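The tactic Jennifer and David describe — asking targeted questions about a platform's terms instead of requesting a generic summary — can be sketched in code. This is only an illustration: the question list and function names are my own, and the output is a set of chat-style prompts you could pass to whatever chat model or API you use.

```python
# A minimal sketch of the terms-and-conditions review tactic discussed
# above: pair the full terms text with specific questions about likeness
# rights, rather than asking for an open-ended summary. The questions
# and structure here are illustrative, not tied to any one chatbot API.

TARGETED_QUESTIONS = [
    "Who will own my likeness after I create an avatar?",
    "What license am I granting, and is it sublicensable or transferable?",
    "Can I opt out of my avatar being used for AI training?",
    "What happens to my likeness if I delete my account?",
]

def build_review_prompts(terms_text: str) -> list[dict]:
    """Pair the full terms text with each targeted question,
    producing chat-style message pairs ready for a chat model."""
    prompts = []
    for question in TARGETED_QUESTIONS:
        prompts.append({
            "system": "You are reviewing a platform's terms of service. "
                      "Answer only from the text provided, and quote the "
                      "relevant clause.",
            "user": f"TERMS:\n{terms_text}\n\nQUESTION: {question}",
        })
    return prompts
```

The key design choice, echoing the conversation above, is one narrow question per prompt: a focused question forces a focused answer, where "summarize this" often returns something as long as the terms themselves.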
Jennifer: Exactly. I think it’s also worth noting how celebrities are licensing their own avatars now.
David: That’s right.
Jennifer: They’re creating digital twins and licensing them for very specific and controlled uses, like commercials.
David: It’s a way for them to make money without even showing up.
Jennifer: Right. They’re even licensing their voices for voiceovers in commercials, which is funny because voiceover gigs are usually pretty easy for actors. And now AI can handle even that.
David: I read an article about an upcoming Tom Hanks movie called Here. They’re using AI to recreate a young Tom Hanks.
Jennifer: Really?
David: Yes. They hired an AI company to scan old footage of him, and the movie will show him at various stages of his life. The technology is so advanced that the head of the AI company was able to show a reporter a live demo. The reporter sat in front of a screen, spoke into it, and saw Tom Hanks’ face mimicking their speech in real time.
Jennifer: That’s insane.
David: Seamless, instantaneous, and perfect. This isn’t like Polar Express, where the animation fell into the uncanny valley.
Jennifer: Oh, yeah. That movie was creepy.
David: So creepy.
Jennifer: The characters looked almost real but weren’t breathing, and their eyes were lifeless.
David: This is going to be much better, though. It’s Tom Hanks acting and saying new things in a new film, but they’ll show him as he looked at different ages.
Jennifer: That’s wild.
David: You could theoretically take another actor, even one who’s passed away, and recreate them using this technology.
Jennifer: Wow. It’s going to be huge for Hollywood, but I wonder about the impact on up-and-coming talent.
David: It could create fewer opportunities for new actors, especially if studios can just use a younger Tom Hanks instead of hiring someone new.
Jennifer: Exactly. Who could compete with Tom Hanks?
David: Nobody.
Jennifer: Especially with that scene in Money Pit where the bathtub falls through the floor.
David: (Laughs)
Jennifer: That scene makes me laugh every time. If you’re having a bad day, watch that clip. The movie is terrible, but that scene where the tub falls through and Tom Hanks just starts laughing hysterically? It’s gold.
David: That’s a good tip.
Jennifer: Thanks. But seriously, this technology is fascinating. Celebrities have this opportunity to license themselves, but it’s not without risks.
David: Right.
Jennifer: I saw a story about middle school boys creating deepfake porn of their female classmates.
David: That’s horrifying.
Jennifer: Beyond devastating for those girls. The internet is forever, so those fake videos will haunt them for the rest of their lives.
David: That’s why we need laws like the No Fakes Act.
Jennifer: Agreed. When do you think it will pass?
David: It’s making its way through Congress, but it might not happen until after the next administration.
Jennifer: Hopefully soon. Europe seems to be ahead of us in passing these kinds of laws.
David: Yes, Europe is racing ahead, and the U.S. is playing catch-up.
Jennifer: We need to protect people because this technology is so powerful and can be used for good or for harm.
David: Exactly.
Jennifer: I also think brands need to start watermarking their content.
David: Yes, that’s key.
Jennifer: Watermarking ensures there’s a source of truth for your branded content, which will be essential as deepfakes become more prevalent.
David: Definitely.
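To make the watermarking idea concrete, here is a toy sketch of an invisible watermark, assuming you have raw 8-bit pixel bytes to work with (no image library needed). Real brand-protection systems use provenance standards such as C2PA content credentials; this least-significant-bit scheme is purely illustrative of how a brand identifier can ride along invisibly inside content.

```python
# Toy invisible watermark: hide a brand identifier in the least
# significant bit of each pixel byte. Changing only the low bit leaves
# the image visually identical while embedding a recoverable mark.

def embed_watermark(pixels: bytes, mark: bytes) -> bytes:
    """Hide `mark` (prefixed with its 4-byte length) in the low bits."""
    payload = len(mark).to_bytes(4, "big") + mark
    # Flatten the payload into bits, most significant bit first.
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for watermark")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the low bit
    return bytes(out)

def extract_watermark(pixels: bytes) -> bytes:
    """Recover the hidden mark by reading the low bits back."""
    def read_bytes(start: int, count: int) -> bytes:
        result = bytearray()
        for b in range(count):
            value = 0
            for i in range(8):
                value = (value << 1) | (pixels[start + b * 8 + i] & 1)
            result.append(value)
        return bytes(result)
    length = int.from_bytes(read_bytes(0, 4), "big")
    return read_bytes(32, length)  # mark starts after the 32 length bits
```

The "source of truth" property Jennifer describes comes from being able to extract the mark later and prove the content originated with the brand; production systems add cryptographic signing on top so the mark itself cannot be forged.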
Jennifer: And in PR, brands will need to train their spokespeople to talk about AI issues and monitor for brand attacks, like fake reviews or manipulated content.
David: If you don’t deal with AI now, you’ll have to later.
Jennifer: Exactly. It’s like the old adage: “Those who fail to plan, plan to fail.”
David: So true.
Jennifer: I think that’s a good place to wrap it up.
David: Agreed.
Jennifer: We hope you like the new set and the new name, AI Central. We’re going to get back to publishing episodes regularly, so thank you for listening, and we’ll see you next time!
Remember, AI won’t take your job. Someone who knows how to use AI will. Upskilling your team today ensures success tomorrow. Customized in-person and virtual team trainings are available. Or, schedule a discovery call for customized AI consulting, including product innovation and a comprehensive strategic roadmap to boost your competitive advantage with AI.
