Discussions

Error talking to AI

I cloned the GitHub repo and pasted in my OpenAI key, but it keeps failing with "Error talking to AI".
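Would checking the key directly against the OpenAI API help narrow it down? Something like this (OPENAI_API_KEY is whatever variable name the repo expects; adjust to match):

```typescript
// Quick check that the key itself is valid, independent of the cloned app.
const res = await fetch("https://api.openai.com/v1/models", {
  headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
});

// 200 means the key works and the problem is in the app's configuration;
// 401 usually means the key is wrong or was pasted with stray whitespace.
console.log(res.status, res.statusText);
```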

How to use multiple development environments

I want to test in multiple environments (dev and QA) and realised that I will need to reuse our HeyGen API key. This means I need to register multiple webhook endpoints, and each environment will have its own secret produced by the webhook API. If we start scaling in production (imagine thousands of calls), webhook callbacks for all the production requests will hit my dev and QA endpoints as well, which will start getting expensive for no good reason. Is there a way to include the endpoint_id in my requests to <https://api.heygen.com/v2/video/generate> so that your servers won't send the webhook results to every registered endpoint? Or, if we have a paid account, do we have access to multiple API keys so that we can split out the environments?
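The workaround I'm considering is to have each environment keep track of the video_ids it creates via the generate endpoint and simply ignore webhook callbacks for any other ID. A rough sketch (the webhook payload shape here is my assumption; check the docs for the real field names):

```typescript
import express from "express";

const app = express();
app.use(express.json());

// IDs of videos this environment actually requested via
// POST https://api.heygen.com/v2/video/generate.
const ownVideoIds = new Set<string>();

app.post("/heygen/webhook", (req, res) => {
  // Payload shape is an assumption; verify the real field names in the docs.
  const videoId = req.body?.event_data?.video_id;

  if (!videoId || !ownVideoIds.has(videoId)) {
    // Callback belongs to another environment (prod, QA, ...): acknowledge and ignore.
    return res.sendStatus(200);
  }

  // ...handle the event for a video this environment created...
  res.sendStatus(200);
});

app.listen(3000);
```

That avoids duplicate processing, but every callback still hits every registered endpoint, which is why per-environment keys or an endpoint_id filter would be much cleaner.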

Streaming Avatar API with trial token

I'm planning to develop with the Streaming Avatar API and am testing it with a trial token. I set the quality parameter to high, but does this avatar support 1080p?
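For context, this is roughly how I'm opening the session; the endpoint path, auth header, and field names are my reading of the docs and may be off:

```typescript
// Rough sketch of creating a streaming session with quality set to "high".
const res = await fetch("https://api.heygen.com/v1/streaming.new", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.HEYGEN_TRIAL_TOKEN}`, // trial token
  },
  body: JSON.stringify({
    quality: "high", // the parameter in question -- does "high" mean 1080p here?
    // ...plus whatever avatar/voice fields your setup needs
  }),
});

console.log(res.status, await res.json());
```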

{ "code": 500, "message": "something wrong in server" }

I'm using the streaming avatar, and when I try to send a task it returns this error: { "code": 500, "message": "something wrong in server" }. Everything else seems fine, so I'm not sure what's causing it.
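For reference, this is roughly how I'm sending the task, with the full response logged; the field names (session_id, text) are from the docs as I understand them and may not match exactly:

```typescript
// Send a task and log everything the server returns, so the generic 500 can
// hopefully be traced back to something concrete.
async function sendTask(sessionId: string, text: string): Promise<void> {
  const res = await fetch("https://api.heygen.com/v1/streaming.task", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-api-key": process.env.HEYGEN_API_KEY ?? "", // or a Bearer session token
    },
    body: JSON.stringify({ session_id: sessionId, text }),
  });

  const body = await res.text();
  console.log("streaming.task", res.status, body);

  if (!res.ok) {
    // Worth ruling out: a session that has already expired or was never fully
    // started can produce opaque server errors.
    throw new Error(`streaming.task failed: ${res.status} ${body}`);
  }
}
```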

I have a problem with StreamingAvatarApi.js in the HeyGen library

Uncaught TypeError: debugStream is not a function at peerConnection.oniceconnectionstatechange (StreamingAvatarApi.js:533:1). This happens every time, even though I never define or call this function myself.
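The workaround I'm about to try is passing a no-op debug callback in the config, but that is only a guess from the stack trace; the class and option names below are from my version of the package and may not exist in yours:

```typescript
import { Configuration, StreamingAvatarApi } from "@heygen/streaming-avatar";

// Guess: the library seems to invoke a debug callback on ICE connection-state
// changes even when none was supplied. Supplying one is a shot in the dark --
// the option name may differ or not exist in other versions of the package.
const token = "<session token from your server>";

const avatar = new StreamingAvatarApi(
  new Configuration({
    accessToken: token,
    debug: (msg: string) => console.debug("[StreamingAvatar]", msg),
  })
);
```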

Error Uploading Consent Video

Every time we upload a consent video recorded on mobile, it returns an error: "Please make sure you read out the code correctly." We have done it multiple times and said the passcode clearly on video, and it still returns the same error.

https://api.heygen.com/v2/templates API returning empty data

I'm trying to get the template ID of my latest template so I can pass variable values through the API, but I can't get the template ID either from the website (I can't copy it; I get a permission error even though I'm the owner of the workspace) or from the API (the response is empty). I am sending my x-api-key; without it I get an unauthorized error.
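For reference, this is roughly the call I'm making:

```typescript
// Bare call to the templates endpoint, dumping the raw response, to check
// whether the empty list comes from the API itself or from my calling code.
// HEYGEN_API_KEY has to come from the same workspace that owns the template.
const res = await fetch("https://api.heygen.com/v2/templates", {
  headers: { "x-api-key": process.env.HEYGEN_API_KEY ?? "" },
});

const body = await res.json();
console.log(res.status, JSON.stringify(body, null, 2));
// A 200 with an empty list would suggest the key is valid but tied to a
// different workspace/team than the one I see in the UI.
```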

Streaming Avatar different voice integration

I want to maintain the current lip-syncing functionality while integrating a third-party voice provider like ElevenLabs and an LLM to improve response times. Is this feasible? If so, are there any existing sample implementations? Additionally, I've noticed a 2-4 second delay between receiving the LLM response and the avatar talking, and I'm aiming to minimize this latency. Would streaming the LLM response or switching to a faster voice provider and LLM help? Alternatively, might deploying the application to the cloud improve performance? For reference, I'm comparing my results to the streaming avatar demo on HeyGen's website, which appears to have faster response times.
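For the streaming idea, this is the kind of relay I have in mind, with speakWithAvatar standing in as a placeholder for whatever call actually sends text to the avatar:

```typescript
// Stream the LLM output and forward each complete sentence to the avatar as
// soon as it is ready, instead of waiting for the full completion.
// `llmStream` is any async iterable of text chunks (e.g. an OpenAI stream);
// `speakWithAvatar` is a hypothetical stand-in for the avatar task call.
async function relay(
  llmStream: AsyncIterable<string>,
  speakWithAvatar: (sentence: string) => Promise<void>
) {
  let buffer = "";
  for await (const chunk of llmStream) {
    buffer += chunk;
    // Flush on sentence boundaries so the avatar starts talking early.
    const parts = buffer.split(/(?<=[.!?])\s+/);
    buffer = parts.pop() ?? "";
    for (const sentence of parts) {
      await speakWithAvatar(sentence);
    }
  }
  if (buffer.trim()) await speakWithAvatar(buffer.trim());
}
```

Would something along these lines get the time-to-first-word closer to what the website demo achieves?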

Avatar movement control

I have seen that the avatar in the URL to Ads demo has a lot more motion. Are there any plans to add movement commands to the video avatars in general? For example, having it wave as it greets the user. Are there plans to make the URL to Ads feature accessible via the API as well? I think this could push the experience to a whole new level. Cheers, KonNinja

Emphasize important information

Hello, I would like to know if and how it is possible to put more emphasis on important keywords in the input text. I would like the avatar to highlight important information within the text. Depending on the voice, it can happen that the avatar slurs its speech or hurries through parts it does not quite know how to pronounce, e.g. specific names and abbreviations. Is there any way to let the avatar know to put extra emphasis on these predefined words? A use case would be the avatar explaining a piece of code and stressing the required libraries and version numbers. I am using the Streaming API. Cheers, KonNinja
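The only workaround I have found so far is pre-processing the text so the keywords are set off by commas, which most voices seem to treat as short pauses, but that is a blunt instrument and not a real emphasis feature:

```typescript
// Workaround sketch: set known keywords off with commas so the voice slows
// down and separates them slightly. Plain text pre-processing only -- not a
// documented HeyGen emphasis feature.
function emphasize(text: string, keywords: string[]): string {
  let out = text;
  for (const kw of keywords) {
    // Escape the keyword for use in a regex, then wrap matches in commas.
    const escaped = kw.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
    out = out.replace(new RegExp(`\\b${escaped}\\b`, "g"), `, ${kw},`);
  }
  return out;
}

// Example: stress the library name and version number in a code explanation.
console.log(emphasize("Install numpy version 1.26 before running the script.",
                      ["numpy", "1.26"]));
// -> "Install , numpy, version , 1.26, before running the script."
```

Is there a proper markup (SSML-like or otherwise) that the Streaming API understands for this?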