I've been experimenting with TypingMind, a web-based wrapper for various LLMs, and recreated parts of my Claude setup with third-party apps at a much higher speed. Specifically, I set up TypingMind with a custom plugin and pointed it at Kimi K2 hosted on Groq, which resulted in remarkably fast inference.
I'll talk about this more in depth in Monday's episode of AppStories.
[Video: Kimi K2 hosted on Groq, demonstrating the improved speed.]