Discussions


In HeyGen, if multiple users log in at the same time, API calls take longer to respond

For example, my `streaming.new` call stays in a pending state for more than 5 minutes. `streaming.start` and `streaming.ice` return a 200 status, but after that the `streaming.task` API does not work: the call itself never happens.
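
For reference, here is a minimal sketch of how I time the `streaming.new` call on my side (axios in Node.js; the API key and avatar name are placeholders):

```js
const axios = require('axios');

// Time a streaming.new request and fail fast instead of hanging for minutes.
async function timedStreamingNew() {
  const started = Date.now();
  try {
    const res = await axios.post(
      'https://api.heygen.com/v1/streaming.new',
      { quality: 'high', avatar_name: 'YOUR_AVATAR_ID' }, // placeholders
      {
        headers: { 'x-api-key': process.env.HEYGEN_API_KEY },
        timeout: 30000, // abort after 30 s rather than waiting indefinitely
      },
    );
    console.log(`streaming.new answered in ${Date.now() - started} ms`);
    return res.data.data;
  } catch (err) {
    console.error(`streaming.new failed after ${Date.now() - started} ms:`, err.message);
    throw err;
  }
}
```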

Can you tell me how to set variables using an upgraded v2 template?

![](https://files.readme.io/b10d2d3-image.png) ![](https://files.readme.io/8ca349a-image.png)

We are able to create the template, but when we fetch the template variables using <https://api.heygen.com/v2/template/{template_id}>, the API returns an "invalid parameter" error and no variables are returned. It was working fine until yesterday at 3 PM IST; we have been facing this issue since then.
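
For reference, this is roughly how we call these endpoints (global `fetch` in Node.js 18+; the generate path and request-body shape are our reading of the Templates V2 docs, so treat them as assumptions, and `HEYGEN_API_KEY` is a placeholder):

```js
const headers = {
  'x-api-key': process.env.HEYGEN_API_KEY,
  'content-type': 'application/json',
};

// Fetch a template and return its variables map,
// e.g. { scene_1_sub_title: { name, type, properties } }
async function getTemplateVariables(templateId) {
  const res = await fetch(`https://api.heygen.com/v2/template/${templateId}`, { headers });
  const body = await res.json();
  if (body.error) throw new Error(JSON.stringify(body.error));
  return body.data.variables;
}

// Generate a video from the template with variable values filled in.
async function generateFromTemplate(templateId, variables) {
  const res = await fetch(`https://api.heygen.com/v2/template/${templateId}/generate`, {
    method: 'POST',
    headers,
    body: JSON.stringify({ title: 'API render', variables }),
  });
  const body = await res.json();
  if (body.error) throw new Error(JSON.stringify(body.error));
  return body.data; // contains the video_id on success
}
```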

Error: "timeout of 120000ms exceeded" at line 1

Hi, could you please let me know what this error means? How can I finish my 3-minute video (purchased on the Business plan)? I am in a bit of a rush to create a video for an upcoming meeting to promote HeyGen at our organisation. It would be great to get some help quickly. Thanks, Heike

Some More HeyGen API-related Questions

Hello again, HeyGen team! I have a few more questions to clarify. I would appreciate your help on these!

1. I encountered a problem related to data validation for a particular variable. Since most variables are responsible for some text on the background, it is very important that the entered text does not overflow the container allocated to it. This is not a problem when the text is entered through your editor, since I, as a user, immediately see the entered text and can adjust the text itself, the block position, font size, boldness, etc., to avoid one text block overlapping another. But if I set the values of variables when creating a video through the API, this is not possible, since apart from the type of the variable I do not have any additional information about it:

```json
"scene_1_sub_title": {
  "name": "scene_1_sub_title",
  "type": "text",
  "properties": {
    "content": ""
  }
}
```

In this regard, the question is: is it possible to somehow set default text for a variable (when adding the variable via the dashboard) so that it would be available in these variables in the response from "Get Template V2"? If not, is anything like this planned for the future? Ideally, it would be great to be able to explicitly set the minimum and maximum text length for each variable, but if this is not possible, then the presence of default content would allow me to calculate at least the maximum text length.

2. I also have a question regarding the avatars and voices used to create a template. I noticed that for some public templates in editing mode via the dashboard, the message "Previous voice option not available. Please select another one" is displayed in the voice selection section. As far as I understand, this can happen if a previously selected avatar or voice has been deleted and is no longer available for use. In this regard, I have a number of questions:

2.1. Could a similar situation happen with templates created by me that are available through the public API endpoint "List Templates V2"?
2.2. Do I understand correctly that this problem can occur not only with the avatar and voice, but also with other elements (fonts, icons, images, etc.)?
2.3. Is it possible to find out through the public API endpoints that a template is not valid (contains elements that are no longer available)?
2.4. Will I get an error when fetching a template by ID ("Get Template V2") if it contains elements that are no longer available?
2.5. What happens if I try to generate a video via "Generate from Template" or "Generate from Template V2" using a template that contains an element that is no longer available (for example, a voice)? Will the API throw some kind of error?

Thanks in advance for your help! Regards, Andrii
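
P.S. To make question 1 more concrete, this is the client-side workaround I am considering while defaults and length limits are not exposed by the API. The per-variable limits below are my own hand-maintained guesses, not values returned by "Get Template V2":

```js
// Workaround sketch: per-variable max lengths maintained by hand,
// since Get Template V2 does not return them.
const MAX_LENGTHS = {
  scene_1_sub_title: 60, // tuned by eye in the dashboard editor
};

function validateVariables(values) {
  const errors = [];
  for (const [name, content] of Object.entries(values)) {
    const limit = MAX_LENGTHS[name];
    if (limit !== undefined && content.length > limit) {
      errors.push(`${name}: ${content.length} chars exceeds limit of ${limit}`);
    }
  }
  return errors;
}

const errors = validateVariables({ scene_1_sub_title: 'A subtitle that might be too long' });
if (errors.length) throw new Error(errors.join('; '));
```

This prevents obviously overflowing text, but it breaks silently whenever the template layout changes in the dashboard, which is why API-provided limits or default content would be much better.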

Streaming Avatar - data: { code: 10013, message: 'avatar not allow' }

I am trying to set up the real-time avatar streaming API using Node.js as the backend. After much trial and error, it now returns "avatar not allow":

```js
const axios = require('axios');
const { apiavatar } = require('../config/server');
require('dotenv').config();

const statusElement = [];
let sessionInfo = null;
let peerConnection = null;

function updateStatus(statusElement, message) {
  statusElement.innerHTML += message + '<br>';
  statusElement.scrollTop = statusElement.scrollHeight;
}

updateStatus(statusElement, 'Please click the new button to create the stream first.');

function onMessage(event) {
  const message = event.data;
  console.log('Received message:', message);
}

// Create a new WebRTC session when clicking the "New" button
async function createNewSession() {
  updateStatus(statusElement, 'Creating new session... please wait');

  const avatar = avatarName.value;
  const voice = voiceID.value;

  // Call streaming.new to get the server's offer SDP and ICE servers,
  // then create a new RTCPeerConnection
  sessionInfo = await newSession('high', avatar, voice);
  const { sdp: serverSdp, ice_servers2: iceServers } = sessionInfo;

  peerConnection = new RTCPeerConnection({ iceServers: iceServers });

  // When an ICE candidate is available, send it to the server
  peerConnection.onicecandidate = ({ candidate }) => {
    console.log('Received ICE candidate:', candidate);
    if (candidate) {
      handleICE(sessionInfo.session_id, candidate.toJSON());
    }
  };

  // When the ICE connection state changes, display the new state
  peerConnection.oniceconnectionstatechange = (event) => {
    updateStatus(
      statusElement,
      `ICE connection state changed to: ${peerConnection.iceConnectionState}`,
    );
  };

  // When audio and video tracks are received, display them in the video element
  peerConnection.ontrack = (event) => {
    console.log('Received the track');
    if (event.track.kind === 'audio' || event.track.kind === 'video') {
      mediaElement.srcObject = event.streams[0];
    }
  };

  // When a message is received, display it in the status element
  peerConnection.ondatachannel = (event) => {
    const dataChannel = event.channel;
    dataChannel.onmessage = onMessage;
  };

  // Set the server's SDP as the remote description
  const remoteDescription = new RTCSessionDescription(serverSdp);
  await peerConnection.setRemoteDescription(remoteDescription);

  updateStatus(statusElement, 'Session creation completed');
  updateStatus(statusElement, 'Now you can click the start button to start the stream');
}

// Start the session and display audio and video when clicking the "Start" button
async function startAndDisplaySession() {
  if (!sessionInfo) {
    updateStatus(statusElement, 'Please create a connection first');
    return;
  }

  updateStatus(statusElement, 'Starting session... please wait');

  // Create and set the local SDP description
  const localDescription = await peerConnection.createAnswer();
  await peerConnection.setLocalDescription(localDescription);

  // Start the session
  await startSession(sessionInfo.session_id, localDescription);
  updateStatus(statusElement, 'Session started successfully');
}

module.exports = async (req, res) => {
  // Set up the request to start a new stream
  const newStreamOptions = {
    method: 'POST',
    url: `${apiavatar}/v1/streaming.new`,
    headers: {
      accept: 'application/json',
      'x-api-key': process.env.apikey,
    },
    data: {
      quality: process.env.quality,
      avatar_name: process.env.avatar_Name,
      voice: {
        voice_id: process.env.avatar_Voice,
      },
    },
  };

  // Send the request to start a new stream
  const responseNew = await axios(newStreamOptions);

  // Extract session_id and sdp from the new-stream response
  const dataStream = responseNew.data.data;
  const session_id = dataStream.session_id;
  const sdp = {
    type: dataStream.sdp.type,
    sdp: dataStream.sdp.sdp,
  };

  const response = await fetch(`${apiavatar}/v1/streaming.start`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-Api-Key': process.env.apikey,
    },
    body: JSON.stringify({ session_id, sdp }),
  });

  if (response.status === 500) {
    console.error('Server error');
    updateStatus(
      statusElement,
      'Server Error. Please ask the staff if the service has been turned on',
    );
    throw new Error('Server error');
  } else {
    const data = await response.json();
    res.status(200).send(data);
  }
};
```

If I load the scripts directly for their operation, it gives me an error with the front-end implementation I am using (Next.js).
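
In case it is useful to anyone debugging the same `10013` code, this is how I now log the full error payload around the same `newStreamOptions` request from above (plain axios error handling; my guess, and it is only a guess, is that 10013 means the `avatar_name` is not enabled for streaming on my account):

```js
// Log the complete API error payload instead of just the message,
// so codes like 10013 ('avatar not allow') come with their context.
try {
  const responseNew = await axios(newStreamOptions);
  console.log('streaming.new ok:', responseNew.data.data.session_id);
} catch (err) {
  if (err.response) {
    // The API answered with an error body, e.g. { code: 10013, message: 'avatar not allow' }
    console.error('streaming.new error:', err.response.status, err.response.data);
  } else {
    console.error('Request failed before reaching the API:', err.message);
  }
}
```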

Browser support

Our digital human test does not display in Safari. Is Safari on macOS supported, or browsers on iOS devices?

Real Time Avatar API: is it possible to stream in H264 instead of webm/VP8?

Hey Team HeyGen, I would like to incorporate your magical real-time avatar livestream into an iOS app. Unfortunately, Apple doesn't support the VP8 codec. Is there a possibility to switch the stream to the H264 codec? It would be amazing if that were possible! Many thanks!! Robin
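
P.S. For context, this is the standard WebRTC call I use to check which video codecs the receiving side can decode (run in the client that consumes the stream):

```js
// List the video codecs the local WebRTC stack can receive.
// On Apple platforms this typically includes H264; VP8 support varies by version.
const caps = RTCRtpReceiver.getCapabilities('video');
console.log(caps.codecs.map((c) => c.mimeType));
// e.g. ['video/H264', 'video/VP8', ...] — if 'video/VP8' is missing,
// the current stream cannot be decoded on this device.
```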

Building Talking Avatar demo on Next.js - Getting error 400 bad request on StartSession Function

Hello, I'm trying to run the example on a version of Next.js. I'm able to obtain the required information in exactly the following order (based on the example and documentation):

1. I upload the image and receive the ID of the new image (I successfully receive the response, or "talk_photo").
2. I open the session with "NewSession" and pass it the image and quality parameters (I successfully receive the response).
3. I then pass the session response to the client (the first two operations are executed server-side).
4. From there, I create the RTCPeerConnection and pass the ice_servers (this step is successful).
5. I then create the descriptions, both remote and local, first because I need them to generate the onicecandidate events (they are also generated successfully).
6. Once I have the descriptions, I call the StartSession function (because according to the documentation, I need to invoke it before calling "realtime ice"). When debugging in the browser, I see that I successfully pass the session_id and sdp parameters. However, this step (StartSession) fails with a 400 Bad Request response.

I reproduced the same steps in Insomnia with the same parameters, and at this point I always get a Bad Request with "Invalid request json body." Regarding the ICE step, I also noticed that I successfully pass the candidates and the session ID, but it never generates the video because, according to the documentation, I need the response from the StartSession function. Could someone provide an idea of what might be happening? I can also provide the ID if needed (it does not belong to this account). More info about the code can be found at <https://stackoverflow.com/questions/77781071/heygen-talking-avatar-demo-migration-to-next-js-getting-400-bad-request-on-sta>
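
For completeness, this is the shape I believe the streaming.start request should have. My assumption is that the failure comes from the RTCSessionDescription not surviving serialization across the Next.js server hop, so I now copy it into a plain `{ type, sdp }` object before sending (the env variable name is a placeholder; real keys should stay server-side):

```js
// Sketch of the streaming.start call; the key point is sending a plain
// { type, sdp } object rather than the RTCSessionDescription instance itself.
async function startSession(sessionId, localDescription) {
  const res = await fetch('https://api.heygen.com/v1/streaming.start', {
    method: 'POST',
    headers: {
      'content-type': 'application/json',
      'x-api-key': process.env.NEXT_PUBLIC_HEYGEN_API_KEY, // placeholder
    },
    body: JSON.stringify({
      session_id: sessionId,
      sdp: { type: localDescription.type, sdp: localDescription.sdp },
    }),
  });
  if (!res.ok) {
    // Surface the body so "Invalid request json body" comes with details
    console.error('streaming.start failed:', res.status, await res.text());
    throw new Error(`streaming.start returned ${res.status}`);
  }
  return res.json();
}
```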

Method to differentiate streaming videos.

I'm currently trying to stream three sentences with a 1 to 1.5-second interval using the `talk text` function. For instance, when I call it in the order "A," "B," "C," I want to tell the streamed video for "A," "B," and "C" apart. In the demo code, they are distinguished using `report.bytesReceived`, but this method isn't precise enough for reliable differentiation. If you have a better approach, I would appreciate it if you could share it with me. (This is intended to display subtitles corresponding to each video.)
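
To illustrate, this is roughly the demo-style detection I have now: polling `getStats()` and treating a pause in `bytesReceived` growth as the boundary between "A," "B," and "C" (`peerConnection` is the connection from the demo; the threshold is a guess):

```js
// Rough segment detection: poll inbound video stats and treat a quiet gap
// (no byte growth for GAP_MS) as the boundary between talk-text utterances.
const GAP_MS = 800; // guess; tune against the 1–1.5 s interval between calls
let lastBytes = 0;
let lastGrowthAt = Date.now();
let segmentIndex = 0;

setInterval(async () => {
  const stats = await peerConnection.getStats();
  stats.forEach((report) => {
    if (report.type === 'inbound-rtp' && report.kind === 'video') {
      if (report.bytesReceived > lastBytes) {
        if (Date.now() - lastGrowthAt > GAP_MS) {
          segmentIndex += 1; // a new utterance ("B", then "C") just started
          console.log('Now playing segment', segmentIndex);
        }
        lastBytes = report.bytesReceived;
        lastGrowthAt = Date.now();
      }
    }
  });
}, 250);
```

The weakness is that network jitter can create a gap mid-sentence and trigger a false boundary, which is why a more explicit per-`talk` signal (for example, an event on the data channel) would be much more reliable for syncing subtitles.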