Discussions
Interactive Avatar Creation TypeError
Hi,
API Voice Settings
Hello, I was wondering if it is possible to use the various voice settings in the API for an Avatar?
      },
      "voice": {
        "type": "text",
        "voice_id": "{{28.Voice}}",
        "input_text": "{{34.result}}",
        "model": "turbo_v2.5",
        "locale": "default_multilingual",
        "stability": 0.5,
        "similarity_boost": 0.69,
        "style_exaggeration": 0.42,
        "speaker_boost": true,
        "speed": 1.0,
        "pitch": 0.0,
        "volume": 1.0
      }
    }
  ]
}
Mute Microphone Interactive Avatar SDK
Hi Gokce or Support, please respond to my previous question; I'm still waiting.
How to generate Bulk avatar videos through api
I want to generate avatar videos in bulk by submitting input text in bulk through the API. I looked all over the documentation but didn't find any way to do this.
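One workaround, absent a dedicated bulk endpoint, is to loop over your scripts and call the v2 video-generate endpoint once per script. A minimal TypeScript sketch, assuming the documented `X-Api-Key` header and `video_inputs` payload shape; the avatar and voice IDs are placeholders:

```typescript
// Build one video-generate payload per script. The payload shape follows
// HeyGen's public v2 API docs; avatarId/voiceId are placeholders you must
// replace with real IDs from your account.
const API_KEY = "<your-heygen-api-key>"; // replace before running

function buildVideoPayload(inputText: string, avatarId: string, voiceId: string) {
  return {
    video_inputs: [
      {
        character: { type: "avatar", avatar_id: avatarId },
        voice: { type: "text", input_text: inputText, voice_id: voiceId },
      },
    ],
    dimension: { width: 1280, height: 720 },
  };
}

// Submit every script sequentially and collect the returned video IDs.
// Sequential requests keep you under rate limits; poll video status separately.
async function generateBulk(scripts: string[], avatarId: string, voiceId: string) {
  const videoIds: string[] = [];
  for (const text of scripts) {
    const res = await fetch("https://api.heygen.com/v2/video/generate", {
      method: "POST",
      headers: { "X-Api-Key": API_KEY, "Content-Type": "application/json" },
      body: JSON.stringify(buildVideoPayload(text, avatarId, voiceId)),
    });
    const json = await res.json();
    videoIds.push(json.data?.video_id);
  }
  return videoIds;
}
```

Each call returns a `video_id` you can poll for completion, so "bulk" here is client-side batching rather than a single request.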
What is x-guest-session-token in the HeyGen API and how do I obtain it ?
I am using the provided API to get knowledge base details: https://api2.heygen.com/v1/streaming/knowledge_base/detail. However, I noticed there is an x-guest-session-token key required in the request header. Can you explain what this is and how I can obtain it?
Interactive Avatar Streaming - use own OpenAI / Chat Service
Hello,
ID = Avatar_ID ??
- Is the id in the response body of the 'Check photo/look generation status' API the avatar_id I need to use for video creation?
- If it isn't, exactly which API's response contains the avatar_id?
- If I want to generate a look that appears exactly like a picture in an avatar group, should I just write 'originallooks_like_in{image_key}' in the prompt?
- Is it possible to upload video assets to an avatar group?
webhook not called
This problem was already raised through the chat window but is still unanswered. Our webhook is not called when videos are created, fail, etc.
Sending Audio Directly to HeyGen Avatar
We are currently integrating HeyGen avatars into our platform and would like to explore advanced API capabilities. Specifically, we are looking for a way to send pre-recorded audio directly to the avatar instead of relying solely on text input for speech generation.
The reason behind this request is that the current avatar voice synthesis does not handle Moroccan Darija or Arabic fluently. As a result, when we send Arabic or Darija text, the pronunciation and delivery are not accurate or natural, which affects the user experience.
We would love to know:
- Is it possible to send audio input directly via the API for the avatar to lip-sync?
- If not, is there a roadmap or plan to support more languages, such as Arabic or Darija, in Interactive Avatars?
- Can we collaborate or request support to help enable these features?
Looking forward to your response and hoping we can find a way to enhance the experience for Arabic-speaking users.
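For the non-streaming video API, the flow we are hoping for would look roughly like this: upload a pre-recorded clip as an asset, then reference it with a voice of type "audio" instead of "text". The endpoint paths and the `audio_asset_id` field below are our reading of the docs, so please confirm them; a hedged TypeScript sketch:

```typescript
// Sketch of an audio-driven payload: instead of a "text" voice, reference an
// uploaded audio asset. Field names here are assumptions based on our reading
// of HeyGen's v2 docs, not a confirmed contract.
function buildAudioVoicePayload(avatarId: string, audioAssetId: string) {
  return {
    video_inputs: [
      {
        character: { type: "avatar", avatar_id: avatarId },
        voice: { type: "audio", audio_asset_id: audioAssetId },
      },
    ],
  };
}

// Upload the raw audio bytes first to obtain an asset id (endpoint and
// response shape assumed; adjust Content-Type to your audio format).
async function uploadAudio(apiKey: string, bytes: ArrayBuffer) {
  const res = await fetch("https://upload.heygen.com/v1/asset", {
    method: "POST",
    headers: { "X-Api-Key": apiKey, "Content-Type": "audio/mpeg" },
    body: bytes,
  });
  const json = await res.json();
  return json.data?.id as string; // asset id to plug into the payload above
}
```

If this flow is supported, it would sidestep the TTS quality problem entirely, since the Darija audio would be produced by our own pipeline and the avatar would only lip-sync to it.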
the USER_START and AVATAR_START_TALKING events sometimes fire even though the STREAM_READY event has not
With Streaming Avatar, the USER_START and AVATAR_START_TALKING events sometimes fire even though the STREAM_READY event has not fired yet.
Is this by design?