Discussions
Bugfix - Important - Voices changed on HeyGen API
Hello,
FULL mode custom LLM — agent not calling base_url endpoint
I’ve configured FULL mode with a custom LLM via llm_configuration_id. My OpenAI-compatible endpoint works correctly (tested streaming and non-streaming). Sessions create successfully, the agent appears in the LiveKit room, and I see lk.agent.events and lk.transcription — but my endpoint receives zero HTTP requests.
LLM Configuration ID: 7899c80f-fab1-40c8-87f7-e57e5e1bc115
Avatar ID: 0930fd59-c8ad-434d-ad53-b391a1768720
base_url: [My Supabase Edge Function URL -- Happy to share privately]
Session creation body:
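(The session creation body above is omitted in the post.) Independent of that body, one way to confirm whether the agent ever calls the endpoint is to temporarily point base_url at a minimal request-logging server. This is a hedged sketch, not HeyGen-specific code — the port, paths, and response body are placeholders:

```python
# Minimal request-logging server: point the custom-LLM base_url at this
# host temporarily to see whether any HTTP traffic arrives at all.
# Port 0 picks a free port; paths and the JSON reply are placeholders.
import http.server
import threading

received = []  # (method, path) of every request that arrives

class LoggingHandler(http.server.BaseHTTPRequestHandler):
    def _log(self):
        received.append((self.command, self.path))
        body = b'{"ok": true}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    do_GET = _log
    do_POST = _log

    def log_message(self, *args):  # silence default stderr logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), LoggingHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()
# Expose this publicly (e.g. via a tunnel) and set it as base_url;
# if `received` stays empty, the agent never called the endpoint.
```

If `received` stays empty while sessions otherwise work, the problem is on the session/LLM-configuration side rather than in the endpoint itself.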
Regarding credit rights
If my Creator membership gives me 200 credits, how many credits does the monthly Pro membership include?
Which is correct, the official documentation or the update news?
API Video requests take long time
Why do API requests (under pay-as-you-go) to generate Avatar III videos take so long? For example, a request with a <10 s audio clip and an avatar ID takes more than 5-7 minutes. Can you suggest improvements or help debug?
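With multi-minute turnaround like this, client code usually polls the status endpoint with exponential backoff rather than a tight fixed interval. A sketch of the backoff schedule only — the initial delay, cap, and total budget are placeholder values, not HeyGen recommendations:

```python
# Hedged sketch: exponential backoff schedule for polling a slow
# video-generation job. All numbers are placeholders to tune.
def backoff_delays(initial=2.0, factor=2.0, cap=60.0, max_total=600.0):
    """Yield polling delays: 2, 4, 8, ... seconds, capped at `cap`,
    stopping once the cumulative wait would exceed `max_total`."""
    delay, total = initial, 0.0
    while total + delay <= max_total:
        yield delay
        total += delay
        delay = min(delay * factor, cap)
```

Usage would be `for d in backoff_delays(): time.sleep(d); check status; break when terminal` — this keeps request volume low early on without hammering the API for the full 5-7 minutes.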
Undocumented video status: waiting
We started getting the status "waiting" in responses from the Get Video Status/Details endpoint. It is not documented on the main API reference page: https://docs.heygen.com/reference/video-status
LiveAvatar Custom AI avatar
I’m looking into the LiveAvatar API for my new app idea, and I have a question about building my own live avatar.
Feature Request: Subtitle color customization fields in /v2/video/generate
Hi,
How to use photo avatar on model III engine?
I am having a hard time creating videos with the Avatar 3 model. My workflow is to upload a photo as an asset, but the upload doesn't seem to return the ID needed to create a video with the Avatar 3 model. What are the API endpoints and workflow for creating videos with that model?
Rate limits on the /v2/video/generate endpoint?
Is any sort of concurrency / batch processing supported?
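Even without server-side batching, a common client-side pattern is to cap the number of in-flight requests with a thread pool sized under the account's concurrency limit. A hedged sketch — `submit_video` is a stub standing in for the real POST to /v2/video/generate, and the limit of 3 is a placeholder, not a documented HeyGen limit:

```python
# Hedged sketch: cap concurrent calls to /v2/video/generate client-side.
# `submit_video` is a stub for the real HTTP call; max_in_flight is a
# placeholder value, not a documented HeyGen concurrency limit.
from concurrent.futures import ThreadPoolExecutor

def submit_video(payload: dict) -> str:
    # Real code would POST `payload` to /v2/video/generate and
    # return the video_id from the response.
    return f"video-{payload['idx']}"

def submit_batch(payloads, max_in_flight=3):
    # pool.map preserves input order and never runs more than
    # max_in_flight submissions at once.
    with ThreadPoolExecutor(max_workers=max_in_flight) as pool:
        return list(pool.map(submit_video, payloads))
```

Pairing this with retry-on-429 handling would cover whatever rate limit the endpoint actually enforces.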