G
About
Low-latency VLM, TTS, and STT inference with built-in orchestration for building realtime AI apps.
Features
- General-purpose realtime pipelines
- AI video editor
- AI voiceover & dubbing
Pros
- Low-latency inference supports real-time VLM, TTS, and STT apps.
- Built-in orchestration helps coordinate multimodal pipelines for real-time experiences.
- API-first approach fits custom video, voiceover, and dubbing workflows.
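The service's own API is not documented here, so as a rough illustration of what "orchestrating" STT, VLM, and TTS stages means, here is a minimal sketch of a realtime pipeline. All function names (`stt`, `vlm`, `tts`, `run_pipeline`) are hypothetical stand-ins, not the product's actual API; the stubs just show the stage order and how concurrent processing keeps per-request latency low.

```python
import asyncio

# Hypothetical stage stubs -- stand-ins for real STT, VLM, and TTS
# inference calls; each awaits to mimic network/model latency.
async def stt(audio: bytes) -> str:
    await asyncio.sleep(0)
    return f"transcript<{len(audio)}B>"

async def vlm(transcript: str, frame: bytes) -> str:
    await asyncio.sleep(0)
    return f"reply to {transcript} with frame<{len(frame)}B>"

async def tts(text: str) -> bytes:
    await asyncio.sleep(0)
    return text.encode()

async def run_pipeline(inputs: list[tuple[bytes, bytes]]) -> list[bytes]:
    """Chain STT -> VLM -> TTS for each (audio, video-frame) pair,
    running pairs concurrently so stages overlap across requests."""
    async def one(audio: bytes, frame: bytes) -> bytes:
        transcript = await stt(audio)
        reply = await vlm(transcript, frame)
        return await tts(reply)
    return await asyncio.gather(*(one(a, f) for a, f in inputs))

if __name__ == "__main__":
    out = asyncio.run(run_pipeline([(b"\x00" * 320, b"\xff" * 1024)]))
    print(out[0].decode())
```

In a real deployment each stub would be a streaming inference call, and the orchestration layer would also handle chunking, backpressure, and interruption.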
