Help Needed: Displaying LLM Text Response Using Streaming API Integration with LiveKit
Hello HeyGen Support Team 👋,
Thank you for the helpful information regarding the AVATAR_TALKING_MESSAGE
event from the Streaming Avatar SDK.
However, I'd like to clarify that I'm not using the Streaming Avatar SDK in my project. I'm using the Streaming API Integration with LiveKit, built entirely with HTML and JavaScript. My setup renders avatar video and audio without errors, and the avatar streams successfully through LiveKit.
What I would like to implement now is an additional feature: displaying the LLM-generated text response (i.e., the content the avatar is speaking) in my chat UI, synchronized with the voice response.
Since I'm using the Streaming API directly (not the SDK), I’d appreciate guidance on:
- Where or how I can access the LLM-generated response in the WebSocket messages.
- Which message type or field contains the spoken text so I can capture and render it.
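For context, here is a minimal sketch of how I currently plan to capture the text, assuming the events arrive as UTF-8 JSON payloads on the LiveKit data channel. The `type` value (`"avatar_talking_message"`) and the `message` field name are my guesses based on the SDK event you mentioned, not confirmed parts of the Streaming API — please correct them if they differ:

```javascript
// Hypothetical parser for data-channel payloads from the Streaming API.
// Assumptions (please correct me if wrong): payloads are UTF-8 JSON, and
// the spoken text lives in a `message` field on an event whose `type` is
// something like "avatar_talking_message".
function extractSpokenText(payload) {
  // LiveKit delivers data-channel payloads as Uint8Array; accept strings too.
  const raw = typeof payload === "string"
    ? payload
    : new TextDecoder().decode(payload);

  let event;
  try {
    event = JSON.parse(raw);
  } catch {
    return null; // not JSON -- ignore
  }

  // These type/field names are guesses, not confirmed API.
  if (event.type === "avatar_talking_message" && typeof event.message === "string") {
    return event.message;
  }
  return null;
}

// Intended wiring (sketch, using livekit-client's DataReceived event):
// room.on(RoomEvent.DataReceived, (payload) => {
//   const text = extractSpokenText(payload);
//   if (text !== null) appendToChatUI(text);
// });
```

Here `appendToChatUI` is just a placeholder for my own rendering function; the part I'm unsure about is which event type and field actually carry the spoken text.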
This feature is important for accessibility and chat history purposes.
Please let me know how I can best achieve this using the Streaming API + LiveKit setup.
Thank you for your support!
Best regards,
Arindam Das