How to use a custom LLM and prompt with the Interactive Avatar API?

In the demo repo the readme says "In the initialMessages parameter, you can replace the content of the 'system' message with whatever 'knowledge base' or context that you would like the GPT-4o model to reply to the user's input with."

source: https://github.com/HeyGen-Official/InteractiveAvatarNextJSDemo?tab=readme-ov-file#how-does-the-integration-with-openai--chatgpt-work


Well, I can't find the initialMessages parameter anywhere in the project.
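
Based on the README wording, I'd expect it to look something like an OpenAI-style messages array (this is just my guess at the shape; the content string is illustrative):

const initialMessages = [
  {
    role: "system",
    content:
      "You are a helpful assistant. Answer using only the following knowledge base: ...",
  },
];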

Also, this is how the demo currently makes the avatar speak:

await avatar.current
  .speak({ text: text, sessionId: data?.session_id! })
  .catch((e) => {
    setDebug(e.message);
  });

It passes the user's input message directly to the avatar; it doesn't go through the OpenAI route mentioned in the README:

"In this demo, we are calling the Chat Completions API from OpenAI in order to come up with some response to user input."


But that route is never actually called anywhere, and the avatar answers me even without my OpenAI key set in .env.


So how can I just tell the avatar to say exactly what I want, instead of relying on HeyGen's internal LLM inference, which must be in place somehow if the avatar can answer without my OpenAI key?
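
In other words, is the intended pattern simply to generate the text myself and pass it to speak()? Something like this (a sketch; myCustomLLM stands in for whatever model or endpoint I control):

// Hypothetical helper that returns a reply from any LLM I control.
const reply = await myCustomLLM(userInput);

// Have the avatar speak that exact string instead of the raw user input.
await avatar.current
  .speak({ text: reply, sessionId: data?.session_id! })
  .catch((e) => {
    setDebug(e.message);
  });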