Discussions
How to find out which voice I used in an avatar video
I need to confirm which voice I used in avatar videos. How do I do this?
How can I apply brand pronunciations when using a template through the API
I'm automating my workflow with n8n and would love to use my brand pronunciations instead of preprocessing the text in n8n. I created a list of 80 business words / acronyms that need to be pronounced correctly, otherwise it sounds weird.
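Until brand-pronunciation support in template API calls is confirmed, here is a minimal sketch of the preprocessing workaround the poster mentions: substituting a pronunciation map into the script text before it is sent to the API. The word list below is hypothetical.

```python
import re

# Hypothetical pronunciation map: spoken spellings for brand terms and acronyms.
PRONUNCIATIONS = {
    "SQL": "sequel",
    "K8s": "kubernetes",
}

def apply_pronunciations(script: str) -> str:
    """Replace each known term with its phonetic spelling (whole words only)."""
    for term, spoken in PRONUNCIATIONS.items():
        script = re.sub(rf"\b{re.escape(term)}\b", spoken, script)
    return script

print(apply_pronunciations("Our K8s platform stores data in SQL."))
# -> "Our kubernetes platform stores data in sequel."
```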
Use ElevenLabs v3 via the API
I set up a voice in the Voice tab, then copy its ID and send a request through the API, but when generating through the API it doesn't come out the same and sounds wrong. Why? How do I specify in the API request that the voice should be generated with ElevenLabs v3?
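For reference, a sketch of how a voice ID copied from the Voice tab is typically passed in a v2/video/generate request body. The field names are assumptions based on my reading of the public API reference and should be verified; no ElevenLabs-model selector is shown, because I cannot confirm one exists in the request schema.

```python
# Sketch of one entry in the "video_inputs" list of a v2/video/generate request.
# Field names are assumptions; check the current HeyGen API reference.
voice = {
    "type": "text",
    "input_text": "Text the avatar should speak.",
    "voice_id": "abc123",  # hypothetical ID copied from the Voice tab
}

video_input = {
    "character": {"type": "avatar", "avatar_id": "avatar_id_here", "avatar_style": "normal"},
    "voice": voice,
}
print(video_input)
```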
Transparent background on v2/generate
Hello,
Encounter "[Errno 111] Connection refused" when trying to use video translation
I was trying to use video translation, it showed "[Errno 111] Connection refused" after clicking "Translate"
I am getting this issue while testing my lambda through the API.
{"code":40099,"message":"Something is wrong, please contact [email protected]"}
Video API Gen IV background issue
I need to generate around 200 videos like this one: https://app.heygen.com/videos/a114277b149149e79e2939fe44f03478
LMNT Voice Provider
I imported the audio from LMNT, but it gives an "unavailable" error message.
Error 400112 "Unauthorized" when using /v2/video/generate with X-Api-Key
Hi, I’m trying to generate a video through the API from n8n using the endpoint:
POST https://api.heygen.com/v2/video/generate
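For comparison with the n8n HTTP Request node settings, a minimal Python sketch of the same call. The endpoint and the X-Api-Key header come from the question above; the payload fields are assumptions to be checked against the v2/video/generate reference.

```python
import requests

API_KEY = "YOUR_HEYGEN_API_KEY"  # the same key configured in n8n

payload = {
    # Minimal assumed body; verify required fields in the v2/video/generate docs.
    "video_inputs": [
        {
            "character": {"type": "avatar", "avatar_id": "YOUR_AVATAR_ID", "avatar_style": "normal"},
            "voice": {"type": "text", "input_text": "Hello from the API.", "voice_id": "YOUR_VOICE_ID"},
        }
    ],
    "dimension": {"width": 1280, "height": 720},
}

resp = requests.post(
    "https://api.heygen.com/v2/video/generate",
    headers={"X-Api-Key": API_KEY, "Content-Type": "application/json"},
    json=payload,
    timeout=60,
)
print(resp.status_code, resp.text)
```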
Best docs for OpenAI Realtime websockets integration
For an integration with OpenAI Realtime, where I send audio to drive an interactive avatar, should I use https://docs.heygen.com/reference/audio-to-video-api, or is there some kind of LiveAvatar replacement? I don't see any technical details in the LiveAvatar docs. They do refer to LiveKit, but that doesn't say anything in particular about interactive avatars.
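For anyone weighing the options, a rough sketch of the OpenAI Realtime side only: opening the websocket and streaming audio in and out. The HeyGen-specific step of driving an interactive avatar with that audio is left as a placeholder, since that is exactly what this question is asking about; the model name and event fields are assumptions based on the public Realtime API.

```python
import base64
import json

from websocket import create_connection  # pip install websocket-client

OPENAI_API_KEY = "YOUR_OPENAI_API_KEY"
MODEL = "gpt-4o-realtime-preview"  # assumed model name; check current availability

# Open the Realtime websocket session.
ws = create_connection(
    f"wss://api.openai.com/v1/realtime?model={MODEL}",
    header=[
        f"Authorization: Bearer {OPENAI_API_KEY}",
        "OpenAI-Beta: realtime=v1",
    ],
)

def send_audio_chunk(pcm16_bytes: bytes) -> None:
    """Append a chunk of 24 kHz mono PCM16 audio to the Realtime input buffer."""
    ws.send(json.dumps({
        "type": "input_audio_buffer.append",
        "audio": base64.b64encode(pcm16_bytes).decode("ascii"),
    }))

def drive_avatar(audio_delta_b64: str) -> None:
    """Placeholder: forward model audio to HeyGen (the part this question asks about)."""
    pass

# Receive loop: collect audio deltas from the model's spoken response.
while True:
    event = json.loads(ws.recv())
    if event.get("type") == "response.audio.delta":
        drive_avatar(event["delta"])
    elif event.get("type") == "response.done":
        break
```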
