How to Dynamically Trigger Interactive Avatar Speech Using External LLM (OpenAI) in Angular + FastAPI Stack?
Hi HeyGen team and community 👋,
We’re building an AI assistant using our own LLM setup (OpenAI/Gemini) and are trying to integrate HeyGen’s Interactive Avatar for dynamic real-time conversations.
⸻
💻 Our Tech Stack:
• Frontend: Angular (TypeScript)
• Backend: FastAPI (Python)
• LLM: OpenAI/Gemini (responses via WebSocket)
• TTS (optional): ElevenLabs
• Goal: When the user asks a question, we send it to our LLM and want the Interactive Avatar to speak the response dynamically, similar to what's shown in your Next.js demo repo. A rough sketch of where we are on the Angular side is below.
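For context, this is roughly what we have today. It's only a sketch (the message shape, service name, and WebSocket URL are placeholders of our own), and the commented TODO is the piece we don't know how to fill in:

```typescript
// assistant.service.ts (Angular) - simplified sketch; names and URLs are placeholders
import { Injectable } from '@angular/core';
import { webSocket, WebSocketSubject } from 'rxjs/webSocket';

// Messages exchanged with our FastAPI backend over the WebSocket
type AssistantMessage =
  | { type: 'user_question'; text: string }   // sent to FastAPI, which calls OpenAI/Gemini
  | { type: 'llm_response'; text: string };   // pushed back once the LLM has answered

@Injectable({ providedIn: 'root' })
export class AssistantService {
  private socket: WebSocketSubject<AssistantMessage> =
    webSocket<AssistantMessage>('wss://our-backend.example.com/ws/assistant');

  start(): void {
    this.socket.subscribe((msg) => {
      if (msg.type === 'llm_response') {
        // TODO: this is the part we are stuck on -
        // how do we hand msg.text to the Interactive Avatar here?
      }
    });
  }

  ask(question: string): void {
    this.socket.next({ type: 'user_question', text: question });
  }
}
```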
⸻
🔍 What We’ve Tried:
We reviewed the speakRequest endpoint, and our understanding is that the Interactive Avatar's text currently cannot be updated dynamically after a session starts; the session has to be reinitialized.
However, your demo clearly shows the Interactive Avatar speaking a new response for each user input, and we'd like to replicate that. Our reading of how the demo does it is sketched below.
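From skimming the demo source, our impression is that it keeps a single streaming session open and issues a speak/repeat task for each new reply, roughly like the snippet below. We may be misreading it, and the exact option names could differ between @heygen/streaming-avatar versions:

```typescript
// Our reading of the demo (may be inaccurate): one session, repeated speak calls.
import StreamingAvatar, { AvatarQuality, TaskType } from '@heygen/streaming-avatar';

async function demoFlow(sessionToken: string, llmResponseText: string) {
  const avatar = new StreamingAvatar({ token: sessionToken }); // short-lived token fetched from a backend route

  // Start the streaming session once, not per message
  await avatar.createStartAvatar({
    avatarName: 'default',        // placeholder avatar id
    quality: AvatarQuality.Low,
  });

  // For every new LLM response, ask the avatar to repeat the text verbatim
  await avatar.speak({
    text: llmResponseText,
    taskType: TaskType.REPEAT,    // option name may vary between SDK versions
  });
}
```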
⸻
🙏 Questions:
1. In your Next.js demo, are you using speakRequest under the hood after each message?
2. Does the demo silently destroy and reinitialize the session on each message? If so, can the same approach be applied in a custom stack (Angular + FastAPI)?
3. Is there any official way (or best practice) to pass dynamic external text (the OpenAI response) to the Interactive Avatar after load, or to reinitialize it efficiently?
4. Any full-stack sample or SDK pattern for non-Next.js frameworks like Angular?
- Specifically, how do we get our backend's LLM response to HeyGen on the frontend? We haven't been able to work out how to send the user's input to our custom LLM and then feed the response back to the HeyGen Interactive Avatar. Our proposed wiring is sketched below.
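If a per-message speak call is indeed the supported pattern, our plan on the Angular side would look roughly like this. Everything here is unverified and the names/URL are placeholders; we'd love confirmation that this is the intended approach:

```typescript
// Proposed wiring (unverified): FastAPI/LLM response -> Interactive Avatar speech
import StreamingAvatar, { TaskType } from '@heygen/streaming-avatar';
import { webSocket } from 'rxjs/webSocket';

type LlmResponse = { type: 'llm_response'; text: string };

export function wireAvatarToBackend(avatar: StreamingAvatar): void {
  const socket = webSocket<LlmResponse>('wss://our-backend.example.com/ws/assistant');

  socket.subscribe(async (msg) => {
    if (msg.type === 'llm_response') {
      // Forward each LLM answer to the already-running avatar session
      await avatar.speak({ text: msg.text, taskType: TaskType.REPEAT });
    }
  });
}
```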
⸻
Any help, sample code, or suggestions from the community would be amazing. We want to create a seamless conversational experience just like your demo.