Moonshot AI, the Chinese lab backed by the likes of Alibaba and HongShan (formerly Sequoia China), has released Kimi K2.5, a new open-source model that understands both text and images and is billed as the strongest open-source model for visual coding. It is now the leading open-weights model, closer than ever to the frontier, with only models from OpenAI, Anthropic, and Google ahead of it.
Kimi K2.5 is an open-source, native multimodal agentic model built through continual pretraining on approximately 15 trillion mixed visual and text tokens on top of Kimi-K2-Base. A Reddit AMA with the Moonshot AI team shed light on why the powerful open-weights model is hard to run, along with new details on agent swarms, RL scaling, and identity drift.

With Kimi K2's open agentic intelligence and long-context capabilities, Kimi positions itself as an intelligent agent for research, writing, coding, and more. On benchmarks, Kimi K2.5 scored 50.2% on Humanity's Last Exam, 18.2 percentage points ahead of Claude Opus 4.5's 32.0%, while costing roughly one-eighth the price.

Designed for agentic reasoning, vision, and large-scale execution, Kimi K2.5 builds on architectural and training upgrades over Kimi K2. Its 262K-token context window helps developers ship code faster with accurate refactoring and tests. And although Kimi K2.5 is a massive mixture-of-experts (MoE) model, Ollama's ecosystem support now makes it easy to deploy a quantized version on a personal computer.
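As a minimal sketch of the Ollama route, the workflow is the usual pull-then-run pair. The model tag below is an assumption for illustration; check the Ollama model library for the actual published name and quantization variants, and note that even quantized builds of a model this size need substantial RAM/VRAM.

```shell
# Hypothetical tag -- the actual quantized build name on Ollama may differ.
MODEL="kimi-k2.5"

# The two commands a local deployment boils down to:
#   ollama pull "$MODEL"   # download the quantized weights
#   ollama run  "$MODEL"   # start an interactive chat session
# Printed here rather than executed so the sketch works without Ollama installed.
printf 'ollama pull %s\nollama run %s\n' "$MODEL" "$MODEL"
```

Once running, Ollama also exposes a local HTTP API, so the same model can back editors and agent tooling rather than only the interactive CLI.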