
I have a problem with StreamingAvatarApi.js in the HeyGen library.

Uncaught TypeError: debugStream is not a function at peerConnection.oniceconnectionstatechange (StreamingAvatarApi.js:533:1). This happens every time, even though I have never defined this function.
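Until the library fixes the undefined reference, a defensive workaround is to guard the call site so a missing helper does not throw inside the ICE state handler. This is a generic sketch, not HeyGen's own code; `safeDebug` is a hypothetical helper name:

```javascript
// Hypothetical guard: only call a debug helper if it actually exists,
// so an undefined symbol like debugStream no longer throws a TypeError.
function safeDebug(fn, ...args) {
  if (typeof fn === "function") {
    return fn(...args);
  }
  return undefined; // silently skip when the helper is absent
}

// Sketch of how the handler in StreamingAvatarApi.js could use it.
// The typeof check avoids a ReferenceError when debugStream was never declared:
//
// peerConnection.oniceconnectionstatechange = () => {
//   safeDebug(
//     typeof debugStream !== "undefined" ? debugStream : undefined,
//     peerConnection.iceConnectionState
//   );
// };
```

If you are consuming the library unmodified, the longer-term fix is to report the missing `debugStream` definition upstream rather than patching the bundled file.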

Error Uploading Consent Video

Every time we upload a consent video recorded on mobile, it returns the error: "Please make sure you read out the code correctly." We have tried multiple times, speaking the passcode clearly on video, and it still returns the same error.

https://api.heygen.com/v2/templates API returning empty data

I'm trying to get the template ID of my latest template so I can pass variable values through the API, but I can't get the template ID from the website (copying fails with a permission error, even though I'm the owner of the workspace) or from the API (it returns empty data). I am sending my x-api-key; without it I get an unauthorized error.
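For reference, a minimal sketch of calling the v2 templates endpoint from the question. The `x-api-key` header name is taken from the question itself; check the current HeyGen docs for the exact auth scheme and response shape for your plan, as both are assumptions here:

```javascript
// Endpoint from the question above.
const TEMPLATES_URL = "https://api.heygen.com/v2/templates";

// Build the request options separately so they are easy to inspect/log
// when debugging an empty response.
function buildTemplatesRequest(apiKey) {
  return {
    method: "GET",
    headers: {
      "x-api-key": apiKey, // header name as used in the question
      accept: "application/json",
    },
  };
}

// Sketch of the actual call (response field names are assumptions):
//
// async function listTemplates(apiKey) {
//   const res = await fetch(TEMPLATES_URL, buildTemplatesRequest(apiKey));
//   if (!res.ok) throw new Error(`HTTP ${res.status}`);
//   const body = await res.json();
//   // A 200 with an empty list often means the key belongs to a different
//   // workspace (or a trial key) rather than the one owning the templates.
//   return body.data?.templates ?? [];
// }
```

Since the request is authorized (no 401) but empty, one thing worth ruling out is whether the API key was generated in the same workspace that owns the templates.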

Streaming Avatar different voice integration

I want to maintain the current lip-syncing functionality while integrating a third-party voice provider such as ElevenLabs, together with an LLM, to improve response times. Is this feasible? If so, are there any existing sample implementations? Additionally, I've noticed a 2-4 second delay between receiving the LLM response and the avatar talking. I'm aiming to minimize this latency. Would implementing streaming LLM responses, or switching to a faster voice provider and LLM, help? Alternatively, might deploying the application to the cloud improve performance? For reference, I'm comparing my results to the streaming avatar demo on HeyGen's website, which appears to have faster response times.

Avatar movement control

I have seen that the avatar in the Url to Ads demo has a lot more motions. Are there any plans to add movement commands to the video avatars in general? For example, have it wave as it greets the user. Are there plans to have the Url to Ads feature also accessible via the API? I think this could push the experience to a whole new level. Cheers, KonNinja

Emphasize important information

Hello, I would like to know if and how it is possible to put more emphasis on important keywords in the input text. I would like the avatar to highlight important information within the text. Depending on the voice, the avatar may slur its speech or hurry through parts it does not quite know how to pronounce, e.g. specific names and abbreviations. Is there any way to tell the avatar to put extra emphasis on these predefined words? A use case would be the avatar explaining a piece of code and stressing the required libraries and version numbers. I am using the Streaming API. Cheers, KonNinja
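Absent documented emphasis markup in the Streaming API, a workaround some developers use is pure text preprocessing: TTS voices generally slow down around punctuation, so surrounding a keyword with commas forces short pauses on either side of it. This is a hypothetical sketch of that idea, not a HeyGen feature:

```javascript
// Wrap each predefined keyword in commas so the voice pauses briefly
// before and after it. Purely a text-level workaround.
function emphasize(text, keywords) {
  let out = text;
  for (const kw of keywords) {
    // Escape regex metacharacters so keywords like "C++" are safe.
    const esc = kw.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
    // Consume the preceding space so the inserted comma hugs the word.
    out = out.replace(new RegExp(`\\s*\\b${esc}\\b`, "g"), `, ${kw},`);
  }
  return out;
}
```

For hard-to-pronounce names and abbreviations, another text-level trick is spelling them out phonetically in the input while showing the correct spelling elsewhere in the UI.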

Increase audio volume of Streaming API

Hello, some voices are quieter than the rest, and I would like to increase their audio volume. What is the best way to achieve this? I am using the Streaming API demo on GitHub. I would appreciate it if you could outline the approach. Cheers, KonNinja
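One client-side option that needs no API changes is routing the avatar's MediaStream through a Web Audio `GainNode` in the browser. This is a browser-only sketch; `boostStreamVolume` is a hypothetical helper meant to replace playing the stream's audio directly through the video element:

```javascript
// Keep gain in a sane range; values above ~4 tend to clip badly.
function clampGain(value, min = 0, max = 4) {
  return Math.min(max, Math.max(min, value));
}

// Route the stream's audio through a GainNode to boost quiet voices.
// Mute the original <video> element afterwards, or the audio plays twice.
function boostStreamVolume(mediaStream, gainValue) {
  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(mediaStream);
  const gain = ctx.createGain();
  gain.gain.value = clampGain(gainValue); // e.g. 2.0 ≈ double volume
  source.connect(gain);
  gain.connect(ctx.destination); // play the boosted audio
  return ctx; // keep the context; call ctx.close() to stop
}
```

Note that `HTMLMediaElement.volume` only goes up to 1.0, so a gain node is the usual way to go louder than the source.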

Streaming Avatar: add pause, tone, etc.

With models like Bark, you can add decorations to your text to give the model hints about tone, etc. Is there a way to give this sort of hint (like smiling, happy, or serious, or the emotions available in the voice settings)?

Sometimes getting a 400 error, then a 200 success, on the streaming.task API

Sometimes the streaming.task API returns a 400 error, and then the same request succeeds with a 200.

Streaming.task bad request error

After I start a session successfully, if I write something in the prompt and use either "talk" or "repeat", I get a 400 Bad Request error on the streaming.task API call. The OpenAI API key is not the issue, because I am able to console.log the LLM response.
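An intermittent 400-then-200 pattern on streaming.task often points to a race, e.g. sending a task before the session is fully established, so a small retry with backoff is a reasonable client-side mitigation while the root cause is investigated. This is a generic sketch; `sendTask` stands in for any function that returns a Promise and rejects on a non-2xx response:

```javascript
// Retry a task-sending function a few times with linear backoff.
// Useful when the first attempt races session setup and returns 400.
async function withRetry(sendTask, attempts = 3, delayMs = 500) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await sendTask();
    } catch (err) {
      lastErr = err;
      // Wait a bit longer after each failed attempt.
      await new Promise((resolve) => setTimeout(resolve, delayMs * (i + 1)));
    }
  }
  throw lastErr; // all attempts failed; surface the last error
}
```

It is also worth logging the 400 response body: the error detail usually says whether the session ID or the task payload was rejected.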