Discussions
Pronunciation Dictionary
Hi Team,
AI Studio not creating videos past 40%
I've made several videos with no problems; now they all stop at various points between 26% and 41% and will not complete.
Optimizing response time in Streaming Avatar SDK with Vite
I am developing an interactive avatar project using the Streaming Avatar SDK with Vite 6.0.6 and TypeScript 5.6.3. I have implemented an advanced system with the following functionalities:
Which part(s) of the Streaming SDK in a Vite and TypeScript project allow you to get the transcript of the user or the avatar?
So I am trying to integrate with the SDK to send audio data directly to the avatar so it can respond as though we're having a call. And I love how it is already
Webhook events for a team
How can we listen to webhook events for all users in Heygen Team or enterprise instead of a single user?
Realtime avatar domain-specific failure
I'm encountering an issue with streamingAvatar.startVoiceChat() using @heygen/streaming-avatar@^2.0.12 (or the latest 2.x version). The call to createStartAvatar completes successfully, and the STREAM_READY event fires for the video stream. However, the subsequent await streamingAvatar.startVoiceChat() call either hangs indefinitely (observed in Chrome) or fails with an explicit WebSocket connection error to wss://api.heygen.com/v1/ws/streaming.chat (observed in Chrome developer logs). The microphone does not activate.
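One way to at least make the "hangs indefinitely" case debuggable is to race the call against a timeout so it surfaces as a catchable error. The sketch below is not part of the SDK — `withTimeout` is a hypothetical helper; only `startVoiceChat()` comes from the report above.

```typescript
// Hypothetical helper (not part of @heygen/streaming-avatar): rejects if a
// promise does not settle within `ms` milliseconds, so a hanging
// startVoiceChat() call becomes a catchable error instead of stalling the app.
function withTimeout<T>(promise: Promise<T>, ms: number, label: string): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`${label} timed out after ${ms}ms`)),
      ms,
    );
    promise.then(
      (value) => { clearTimeout(timer); resolve(value); },
      (err) => { clearTimeout(timer); reject(err); },
    );
  });
}

// Usage sketch, wrapping the call from the report above:
// await withTimeout(streamingAvatar.startVoiceChat(), 15_000, "startVoiceChat");
```

Catching the timeout lets the app log the failure (or retry) rather than leaving the user staring at a frozen stream.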
simulate webhook event
Is there any way to simulate or receive webhook events locally?
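A common pattern: run a small local HTTP receiver, then either POST simulated payloads at it with curl or expose the port through a tunnel (e.g. ngrok) so the real provider can reach it. The sketch below is generic — the `event_type`/`event_data` payload shape is an assumption based on common webhook conventions, not taken from HeyGen's docs.

```typescript
import * as http from "http";

// Assumed payload shape — adjust field names to match the real events.
interface WebhookEvent {
  event_type: string;
  event_data?: Record<string, unknown>;
}

// Parse a raw request body into a webhook event, or null if malformed.
function parseWebhookBody(raw: string): WebhookEvent | null {
  try {
    const parsed = JSON.parse(raw);
    if (parsed && typeof parsed.event_type === "string") {
      return parsed as WebhookEvent;
    }
    return null;
  } catch {
    return null;
  }
}

// Listen locally; tunnel this port so the provider can reach it,
// or POST simulated events at it yourself.
function startReceiver(port: number): http.Server {
  const server = http.createServer((req, res) => {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      const event = parseWebhookBody(body);
      if (event) {
        console.log("received webhook:", event.event_type);
        res.writeHead(200);
        res.end("ok");
      } else {
        res.writeHead(400);
        res.end("bad payload");
      }
    });
  });
  return server.listen(port);
}
```

To simulate an event (hypothetical event name): `curl -X POST localhost:3000 -d '{"event_type":"video.completed"}'`.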
n8n interactive avatar
I have the workflows, but it seems impossible to make it work. I put the JavaScript and the HTML on my web page, but when I check with typeof I always get undefined.
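`typeof X === "undefined"` right after embedding a snippet is usually a load-order problem: the page reads the global before the external script has finished loading. One way to rule that out is to poll for the global instead of reading it immediately. This helper is hypothetical, not part of any embed snippet; the global's name depends on what the snippet actually defines.

```typescript
// Hypothetical helper: polls until a named global appears on globalThis,
// instead of reading it before the external <script> has loaded — the
// usual cause of `typeof X === "undefined"` with embed snippets.
function waitForGlobal(
  name: string,
  timeoutMs = 5000,
  intervalMs = 50,
): Promise<unknown> {
  return new Promise((resolve, reject) => {
    const start = Date.now();
    const poll = () => {
      const value = (globalThis as Record<string, unknown>)[name];
      if (typeof value !== "undefined") return resolve(value);
      if (Date.now() - start > timeoutMs) {
        return reject(new Error(`global "${name}" never appeared`));
      }
      setTimeout(poll, intervalMs);
    };
    poll();
  });
}
```

If the global never appears even after waiting, the script tag itself is likely failing to load (check the browser network tab), which is a different problem than load order.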
Interactive avatar can't hear me
I have embedded my Heygen Interactive avatar on my wix website and it loads and opens with the first phrase but then does not respond to any verbal input. Microphone is on and allowed in the code.
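A quick way to tell whether the page can actually open the microphone (as opposed to the OS-level mic being on) is to attempt getUserMedia directly. One common cause on embedded sites — offered here as a hypothesis, since the report doesn't confirm it — is that the embed runs inside an iframe created without `allow="microphone"`, in which case getUserMedia fails regardless of browser permissions. The helper below takes the mediaDevices object as a parameter so the check is easy to exercise outside a browser.

```typescript
// Minimal structural type so the check doesn't depend on DOM lib typings.
interface MediaDevicesLike {
  getUserMedia(constraints: { audio: boolean }): Promise<{
    getTracks(): { stop(): void }[];
  }>;
}

// Returns true only if the page can actually open an audio stream.
// In a browser, call it as: micIsAvailable(navigator.mediaDevices)
async function micIsAvailable(
  mediaDevices: MediaDevicesLike | undefined,
): Promise<boolean> {
  if (!mediaDevices || typeof mediaDevices.getUserMedia !== "function") {
    return false; // no mediaDevices at all (insecure context, old browser)
  }
  try {
    const stream = await mediaDevices.getUserMedia({ audio: true });
    stream.getTracks().forEach((t) => t.stop()); // release the device
    return true;
  } catch {
    return false; // NotAllowedError, NotFoundError, iframe policy, etc.
  }
}
```

If this returns false inside the Wix embed but true on a bare test page, the embed's iframe permissions are the likely culprit.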
Issue with V2 Streaming API: Avatar Speaks Default Content Instead of /task Text
We are currently integrating your V2 Streaming API (using REST API calls and LiveKit for media) via a Node-RED backend orchestrator, and we're encountering an issue where the avatar is not speaking the text provided via the /v1/streaming.task endpoint. Instead, it seems to be reverting to a default conversational script. For example, when the prompt says "Your name is Aida, and you are a friendly assistant. Start every conversation with a robot joke", the avatar June_HR_public, instead of starting off with a joke, opens with "That was a funny joke. I am June with HeyGen...". So it's clearly taking these inputs as "user" input when they should be "assistant" text she is supposed to say.
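The symptom above — text sent to /v1/streaming.task being answered conversationally rather than spoken — is consistent with the request's task type selecting the conversational mode, where the text is fed to the avatar's LLM as user input. A verbatim mode, if the API offers one, would make the avatar speak the text as-is. The sketch below assumes a `task_type` field with `"repeat"` (verbatim) and `"chat"` (conversational) values; verify both names against HeyGen's API reference before relying on them.

```typescript
// Assumed request body for /v1/streaming.task — field names are an
// assumption to verify against the official API reference.
interface TaskPayload {
  session_id: string;
  text: string;
  task_type: "repeat" | "chat";
}

// Build a payload; speakVerbatim=true selects the assumed verbatim mode,
// so the text is spoken as-is instead of being treated as user input.
function buildTaskPayload(
  sessionId: string,
  text: string,
  speakVerbatim = true,
): TaskPayload {
  return {
    session_id: sessionId,
    text,
    task_type: speakVerbatim ? "repeat" : "chat",
  };
}

// Usage sketch (API key, endpoint headers, and session id are placeholders):
// await fetch("https://api.heygen.com/v1/streaming.task", {
//   method: "POST",
//   headers: { "x-api-key": HEYGEN_API_KEY, "content-type": "application/json" },
//   body: JSON.stringify(buildTaskPayload(sessionId, "Here is a robot joke...")),
// });
```

From a Node-RED orchestrator, the same payload can be assembled in a function node before the HTTP request node that calls the endpoint.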