Discussions
Getting default HeyGen LLM response from Avatar
Hi Team,
I have my own custom LLM, and I am integrating a speech-to-speech conversation with it.
First, I need the avatar to speak my custom welcome message. For that, I am using this:
await avatar.startVoiceChat({ useSilencePrompt: false });

const welcomeMessage = "Hi There! Welcome! How can I assist you?";
await avatar.speak({
  text: welcomeMessage,
  taskType: TaskType.REPEAT,
  taskMode: TaskMode.SYNC,
});
After the avatar speaks the welcome message, I start asking my questions.
const handleVoiceMessage = async (message) => {
  const avatar = avatarRef.current;
  if (!avatar) return;

  if (message && message.detail) {
    const userMessage = message.detail.message;
    addToTranscript("user", userMessage);

    const data = await getLLMResponse(userMessage);
    if (data && data.bot) {
      await avatar.speak({
        text: data.bot,
        taskType: TaskType.REPEAT,
        taskMode: TaskMode.SYNC,
      });
    } else {
      console.error("No bot response received.");
    }
  }
};
The problem: before my LLM response arrives, the avatar starts speaking HeyGen's default LLM response, for example: "Sorry! Could not understand your question."
Once my LLM response arrives, the avatar then speaks it (i.e., the expected answer from my LLM).
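One workaround I am considering is interrupting whatever the avatar is currently saying before speaking my LLM's answer. This is only a sketch: the SDK does expose an interrupt() method, but I have not confirmed it reliably cuts off the default-LLM reply in this race, and the AvatarLike interface and speakCustomResponse helper below are my own stand-ins, not part of @heygen/streaming-avatar:

```typescript
// Stand-in for the relevant slice of the StreamingAvatar API (assumption:
// interrupt() stops the current utterance, speak() queues a new one).
interface AvatarLike {
  interrupt(): Promise<void>;
  speak(req: { text: string; taskType: string; taskMode: string }): Promise<void>;
}

// Hypothetical helper: cut off any in-progress (default-LLM) utterance,
// then speak the custom LLM's answer verbatim.
async function speakCustomResponse(avatar: AvatarLike, text: string): Promise<void> {
  await avatar.interrupt();
  await avatar.speak({ text, taskType: "repeat", taskMode: "sync" });
}
```

If interrupt() behaves as assumed, calling speakCustomResponse(avatar, data.bot) in place of the plain avatar.speak(...) would at least stop the unwanted default reply mid-sentence, though it would not prevent it from starting.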
To capture my users' voice input, I am using the code below:
avatar.on(StreamingEvents.USER_TALKING_MESSAGE, (message) => {
  console.log("user talking message", message);
  handleVoiceMessage(message);
});
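Since USER_TALKING_MESSAGE seems to fire for partial/interim transcript chunks, I also tried buffering the chunks and only calling my LLM once per finished turn. A sketch of that buffering (my own logic, not part of the SDK; I am assuming an end-of-turn event such as USER_END_MESSAGE exists to trigger the flush):

```typescript
// Accumulates interim transcript chunks and emits one combined user turn.
// The wiring to StreamingEvents is an assumption; the class itself is plain TS.
class TurnBuffer {
  private parts: string[] = [];

  constructor(private onTurn: (fullMessage: string) => void) {}

  // Call on each USER_TALKING_MESSAGE (interim chunk).
  addPartial(chunk: string): void {
    if (chunk.trim().length > 0) this.parts.push(chunk.trim());
  }

  // Call on the end-of-turn event: emit the full turn once, then reset.
  endTurn(): void {
    if (this.parts.length === 0) return;
    this.onTurn(this.parts.join(" "));
    this.parts = [];
  }
}
```

I would then route buffered turns into the existing handler, e.g. new TurnBuffer((msg) => handleVoiceMessage({ detail: { message: msg } })), so getLLMResponse is called once per question instead of once per chunk.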
Going through earlier discussions on this same issue, I saw you suggest using our own speech-to-text (STT) solution to transcribe the user's input.
Do you mean that we cannot use your event (USER_TALKING_MESSAGE) to get the user's input?
If yes, please suggest an alternative.
If no, please tell me what is wrong with my code.