We've helped a number of Pipecat users hook into a variety of content moderation systems or use LLMs as judges.
The most common approach is to use a `ParallelPipeline` to evaluate the LLM's output while TTS inference is running, then cancel the output and call a fallback function if a moderation condition triggers.
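Here's a minimal asyncio sketch of that pattern, independent of Pipecat itself: a (stubbed) moderation check runs concurrently with (stubbed) TTS playback, and the playback task is cancelled if the check flags the text. The `moderate()` and `speak()` functions and the blocklist are placeholders for illustration, not Pipecat or moderation-API calls.

```python
import asyncio

BLOCKLIST = {"forbidden"}  # stand-in for a real moderation policy

async def moderate(text: str) -> bool:
    # Placeholder: a real check would call a moderation API or an LLM judge.
    await asyncio.sleep(0.01)
    return any(word in text.lower() for word in BLOCKLIST)

async def speak(text: str) -> str:
    # Placeholder for streaming TTS inference/playback.
    await asyncio.sleep(0.05)
    return f"spoke: {text}"

async def speak_with_moderation(text: str) -> str:
    tts = asyncio.create_task(speak(text))  # start TTS immediately
    if await moderate(text):                # moderation runs in parallel
        tts.cancel()                        # stop the in-flight TTS
        return "blocked"                    # here you'd call your fallback
    return await tts

async def main() -> None:
    print(await speak_with_moderation("hello there"))
    print(await speak_with_moderation("forbidden topic"))

asyncio.run(main())
```

In a real Pipecat pipeline the two branches would be processors inside the `ParallelPipeline` rather than bare tasks, but the cancel-on-flag control flow is the same.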
Other people have written custom frame processors that use the content moderation scoring in the Google and Azure APIs.
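A plain-Python sketch of the shape such a frame processor takes: frames pass through unless a severity score crosses a threshold. The `severity_score()` stub and the 0.7 threshold are illustrative assumptions; the real Google and Azure APIs return per-category scores you'd map into a gate like this.

```python
from dataclasses import dataclass

@dataclass
class TextFrame:
    text: str

def severity_score(text: str) -> float:
    # Placeholder: stand-in for a moderation API's severity score.
    return 0.9 if "badword" in text else 0.0

class ModerationGate:
    """Drops text frames whose moderation score exceeds the threshold."""

    def __init__(self, threshold: float = 0.7):
        self.threshold = threshold
        self.passed: list[TextFrame] = []

    def process_frame(self, frame: TextFrame) -> None:
        if severity_score(frame.text) < self.threshold:
            self.passed.append(frame)  # forward downstream (to TTS, etc.)
        # else: drop the frame; a real processor might emit an error frame

gate = ModerationGate()
gate.process_frame(TextFrame("hello"))
gate.process_frame(TextFrame("badword here"))
print([f.text for f in gate.passed])  # → ['hello']
```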
If you're interested in building a Pipecat integration for your employer's tech, I'd be happy to support that. Feel free to DM me on Twitter.
I saw a mention of content moderation when the author discussed https://github.com/pipecat-ai/pipecat.
But when I went to the GitHub repo, I didn't see anything about it.
I'm loosely connected to the content moderation space through my employer, so I wanted to learn more.