Anthropic has released Haiku 4.5, its smallest model, which delivers performance comparable to Sonnet 4 at one-third the price and twice the speed. The company will make it the default model on free plans, reducing server load and opening up new AI applications.
How does Haiku 4.5 perform?
Anthropic backs its claims with benchmark results: Haiku 4.5 scored 73% on SWE-Bench Verified and a 41% success rate on Terminal-Bench. Those scores trail Sonnet 4.5, but they put Haiku in the range of Sonnet 4, which competes with GPT-5 and Gemini 2.5. Haiku also delivered comparable results in tool use, computational tasks, and visual reasoning tests, a notable showing for such a lightweight model.

Haiku 4.5 is becoming the default model for all free Anthropic plans. The company prefers it for its free AI products because it minimizes server load while still offering substantial capability. Its lightweight design allows multiple Haiku agents to run in parallel and to be paired with more advanced models. This approach reduces costs and broadens access.
Anthropic CPO Mike Krieger says Haiku opens up new categories of production deployments. “While Sonnet handles complex planning, Haiku quickly launches sub-agents,” he says. Together, the models form a complete agent toolbox, each balancing intelligence, speed, and cost. Krieger emphasizes that deployments like this were not practical before, and that they are transforming how AI is put into production.
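The pattern Krieger describes, a larger model planning while many fast sub-agents execute in parallel, can be sketched in a few lines of Python. This is a minimal illustration, not Anthropic's implementation: `run_subagent` is a hypothetical stand-in for a call to a fast model such as Haiku, and the plan is assumed to come from a planner like Sonnet.

```python
from concurrent.futures import ThreadPoolExecutor

def run_subagent(subtask: str) -> str:
    # Hypothetical stand-in for a sub-agent call to a fast model
    # (e.g. Haiku via the Anthropic Messages API in a real system).
    return f"result for: {subtask}"

def orchestrate(plan: list[str]) -> list[str]:
    # `plan` would come from the planner model (e.g. Sonnet);
    # the cheap, fast sub-agents then execute its steps in parallel.
    with ThreadPoolExecutor(max_workers=len(plan)) as pool:
        return list(pool.map(run_subagent, plan))

results = orchestrate(["lint the diff", "write unit tests", "update docs"])
```

The fan-out is what makes a small model attractive here: each sub-task costs a fraction of a large-model call, so running several at once stays cheap and fast.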
The most immediate applications are in software development tools, where Claude Code is already widely used and latency is critical. Zencoder CEO Andrew Filev says Haiku 4.5 “opens up entirely new use cases.” The model speeds up developers and holds potential in other industries as well.
So, what are your thoughts on Haiku 4.5? How will this innovation impact the AI world? Share your thoughts with us in the comments!