Knowledge Base -- our own LLM?

Super excited to be building with the streaming Avatar. We want to put some real intelligence into our avatar. What's the best way to do this? Ideally, I'd be able to hook the avatar up to our LLM (e.g. a ChatGPT assistant which we build and train).


Is the right approach to have this happen on the client and then stream speak commands to HeyGen?

I.e. client -> OpenAI -> client -> HeyGen?


The challenge here is that we either need to wait for the OpenAI stream to complete before sending to HeyGen, or implement our own chunking logic. If this is the correct approach, it would be a lot better if we could stream directly to HeyGen so that you can handle the chunking.
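For context, here is a minimal sketch of the kind of chunking logic I mean: accumulate streamed LLM tokens in a buffer and flush a chunk at sentence boundaries, so each chunk could be sent as a separate speak command. This is purely illustrative (the `chunk_stream` helper and `min_len` threshold are my own hypothetical names, not part of any HeyGen or OpenAI API):

```python
import re

def chunk_stream(tokens, min_len=20):
    """Group streamed LLM tokens into sentence-sized chunks.

    Hypothetical chunking logic: flush the buffer whenever it exceeds
    min_len characters and ends in sentence-final punctuation, so each
    yielded chunk could be sent to the avatar as one speak command.
    """
    buffer = ""
    for token in tokens:
        buffer += token
        # Flush on a sentence boundary once the buffer is long enough.
        if len(buffer) >= min_len and re.search(r"[.!?]\s*$", buffer):
            yield buffer.strip()
            buffer = ""
    # Flush any trailing partial sentence when the stream ends.
    if buffer.strip():
        yield buffer.strip()

tokens = ["Hello", " there", ".", " This is", " a second sentence", ".", " Bye"]
print(list(chunk_stream(tokens, min_len=5)))
```

Having to maintain logic like this on the client (and tune the boundary heuristics for naturalness) is exactly the overhead that server-side streaming support would remove.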