Discussions

Is LiveAvatar backend now sending event types that SDK 0.0.11 doesn't support?

After the API update announced 20th March (Third-Party Integrations, Custom LLM/TTS), our LiveAvatar integration stopped working.
What we're seeing:

Avatar V for Digital Twins through an API Call

I saw that Avatar V is now available for Digital Twins. How can I use it through the API?
I cannot find Avatar V in the API Docs.

Avatar "Reverse Body Movement" after 30 Seconds

The Avatars (Digital Twins) I use all have about 3 minutes of footage that I uploaded.

Native 9:16 Portrait Support and Best Practices for Dynamic Backgrounds

Hi HeyGen Engineering Team,

Issue with LiveAvatar avatars not starting

Hello, HeyGen

Answered

Cancellation Endpoint + Stuck API Jobs Consuming Credits

Hi team, I’m integrating HeyGen into a project workflow and have run into a critical issue that I need clarification on.

Answered

Avatar awkward gestures when audio is silent

I am using the API to generate a speaking video from audio files (the avatar was created from an image). The issue is that when the audio contains some silence, or when I send an entirely silent file, the avatar does not remain still: it keeps making awkward movements, as if it were speaking with its mouth closed. I would like the avatar to stay still, with small natural movements, during those silent moments instead of acting like a weird mime. Is there a way to do that?
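One possible client-side workaround (not official HeyGen guidance) is to check whether an audio clip is effectively silent before sending it, and substitute an idle segment or skip the request instead. A minimal sketch using only the Python standard library; the 0.01 RMS threshold and the 16-bit mono PCM assumption are mine:

```python
# Sketch: detect near-silent 16-bit mono PCM WAV audio before sending it
# to the avatar API. Threshold and format assumptions are illustrative.
import io
import math
import struct
import wave

def rms_of_wav(wav_bytes: bytes) -> float:
    """Root-mean-square amplitude of a 16-bit mono WAV, normalized to [0, 1]."""
    with wave.open(io.BytesIO(wav_bytes), "rb") as wf:
        assert wf.getsampwidth() == 2, "sketch assumes 16-bit PCM"
        frames = wf.readframes(wf.getnframes())
    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    if not samples:
        return 0.0
    return math.sqrt(sum(s * s for s in samples) / len(samples)) / 32768.0

def is_silent(wav_bytes: bytes, threshold: float = 0.01) -> bool:
    return rms_of_wav(wav_bytes) < threshold

# Build a 1-second silent 16 kHz WAV in memory to demonstrate.
buf = io.BytesIO()
with wave.open(buf, "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)
    wf.setframerate(16000)
    wf.writeframes(b"\x00\x00" * 16000)
print(is_silent(buf.getvalue()))  # True for pure silence
```

This only avoids sending silent clips; it does not change how the avatar animates during silent stretches inside a longer clip.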

Answered

Using Mirror Voice functionality via API

Hi Team,

Answered

9:16 bug

When I create a photo avatar via the API using a 9:16 photo (generated in Nana Banan 2, also 9:16) and then generate a talking video through your API with aspect_ratio set to 9:16, the output video has white borders of a few pixels on each side. This is very noticeable and I cannot find any way to remove them. Could you explain why this happens and how to fix it?
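While waiting for a fix, a stopgap is to crop the few-pixel white border in post-processing. A minimal sketch of the trimming logic on a single frame represented as rows of grayscale pixels; a real pipeline would apply the same crop box to every frame via ffmpeg or OpenCV, which this sketch does not cover:

```python
# Sketch: find the bounding box of non-white content and trim a uniform
# white border from a frame (list of rows of grayscale pixel values).
def trim_white_border(frame, white=255):
    rows = [i for i, row in enumerate(frame) if any(p != white for p in row)]
    cols = [j for j in range(len(frame[0]))
            if any(row[j] != white for row in frame)]
    if not rows or not cols:
        return []  # frame is entirely white
    return [row[cols[0]:cols[-1] + 1] for row in frame[rows[0]:rows[-1] + 1]]

# 4x4 frame with a 1-pixel white border around a 2x2 dark center.
frame = [
    [255, 255, 255, 255],
    [255,  10,  20, 255],
    [255,  30,  40, 255],
    [255, 255, 255, 255],
]
print(trim_white_border(frame))  # [[10, 20], [30, 40]]
```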

Answered

Why the Video Agent would suddenly change the avatar and voice it selects

We successfully generated approximately 20 videos using the /v1/video_agent/generate endpoint with consistent avatar and voice. Then without any changes to our code or prompt, subsequent videos started using completely different avatars and voices. We did not change the prompt or any parameters between the consistent videos and the inconsistent ones.
Can you explain why the Video Agent would suddenly change the avatar and voice it selects mid-batch? Is there a way to lock a specific avatar and voice in the Video Agent endpoint so it stays consistent across all videos? Or is there a session/context that resets after a certain number of videos?