Avoid duplicate identical avatar lipsync request for the same transparent/opaque UGC video

We were hoping to use HeyGen for personalized videos from available UGC avatars in the dashboard.

We would like to generate a transparent avatar lipsync video so we can supply our own script for the UGC avatar to say, and then mix it with footage of the avatar talking in their actual environment (non-transparent).

Now, the docs list an API endpoint, v2/video/generate, to generate regular non-transparent talking videos, and a v1/video.webm endpoint to generate an avatar with a transparent background. We could send the exact same text to both endpoints and merge the two videos, but that would needlessly generate the same thing twice, and we don't really need the lipsync from the non-transparent generation anyway. It would be sufficient to generate only the lipsynced transparent avatar, and for scenes where we want the avatar in its natural environment we could merge it with the avatar's source environment.
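To illustrate what we have in mind, here is a minimal sketch of a single transparent-render request against the v1/video.webm endpoint. The payload field names (`avatar_id`, `input_text`, `voice_id`) and the auth header are assumptions on our part and would need to be checked against the current HeyGen API reference:

```python
import json

API_KEY = "YOUR_HEYGEN_API_KEY"  # placeholder

# Hypothetical payload — field names are assumptions, not confirmed API spec.
payload = {
    "avatar_id": "your_ugc_avatar_id",    # hypothetical: the UGC avatar to render
    "input_text": "Hello from our app!",  # the personalized script to lipsync
    "voice_id": "your_voice_id",          # hypothetical: voice used for the lipsync
}

# The actual request (not executed here) would look something like:
# import urllib.request
# req = urllib.request.Request(
#     "https://api.heygen.com/v1/video.webm",
#     data=json.dumps(payload).encode(),
#     headers={"X-Api-Key": API_KEY, "Content-Type": "application/json"},
# )
# resp = urllib.request.urlopen(req)

print(json.dumps(payload, indent=2))
```

The idea is that this single call would give us the transparent lipsynced layer, which we would then composite either over our own content or over the avatar's original background footage.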

Is something like this possible via HeyGen? How would we approach this using your API? As far as I understand, the recorded avatar movements are the same and only the face is lipsynced. Is it possible to get the avatar's original, un-lipsynced full video, so we can then use the transparent avatar video generation and composite it for our needs?