
DeepSeek R1-0528 Launches as Open Source Rival to OpenAI o3 and Google Gemini 2.5 Pro

DeepSeek has unveiled R1-0528, a major open-source AI model update that rivals OpenAI o3 and Google Gemini 2.5 Pro in reasoning, coding, and reliability, with new features for developers.

DeepSeek has released R1-0528, a significant update to its open-source reasoning AI model, positioning it as a formidable challenger to proprietary models like OpenAI o3 and Google Gemini 2.5 Pro. Building on the momentum from its initial January release, DeepSeek aims to deliver advanced reasoning capabilities and enhanced developer features—all under the permissive MIT License.

Major Performance Improvements

The R1-0528 update brings notable advancements in handling complex reasoning tasks across math, science, business, and programming. According to DeepSeek, these improvements are the result of increased computational resources and algorithmic optimizations during post-training. The model now demonstrates:

  • 87.5% accuracy on the AIME 2025 test, up from 70% in the previous version
  • 73.3% accuracy on the LiveCodeBench coding dataset, up from 63.5%
  • More than double the performance on the challenging “Humanity’s Last Exam” benchmark

These gains bring DeepSeek-R1-0528 closer to the performance of leading paid models, while remaining accessible and open source.

Enhanced Features for Developers

DeepSeek-R1-0528 introduces several new features to improve usability and integration:

  • JSON output and function calling support for easier application integration (see the sketch after this list)
  • Refined front-end capabilities for smoother user interactions
  • Reduced hallucination rates, resulting in more reliable and consistent outputs
  • Support for system prompts, which streamlines deployment and removes the need for a special token to activate the model’s reasoning mode

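The announcement itself does not include code, but the sketch below gives a rough sense of how JSON output might be requested from R1-0528 through an OpenAI-compatible endpoint. The base URL, the deepseek-reasoner model name, and the response_format parameter are assumptions drawn from DeepSeek’s general API conventions, not details confirmed in this release.

```python
# Minimal sketch: requesting structured JSON output from R1-0528 via an
# OpenAI-compatible API. The base URL, model name, and response_format
# support are assumptions, not details confirmed in this announcement.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # placeholder credential
    base_url="https://api.deepseek.com",  # assumed OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-reasoner",            # assumed identifier for R1-0528
    messages=[
        {"role": "system", "content": "Reply with a single JSON object only."},
        {"role": "user", "content": "Summarize R1-0528's headline benchmark gains."},
    ],
    response_format={"type": "json_object"},  # request JSON output mode
)

print(response.choices[0].message.content)
```

Function calling would follow the same general pattern, passing a list of JSON-schema tool definitions alongside the messages.
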
Developers can access the model weights via Hugging Face and find detailed documentation on DeepSeek’s GitHub repository. API users will automatically benefit from the update at no extra cost.

Smaller Variants for Broader Access

Recognizing the need for accessible AI, DeepSeek has also released a distilled variant, DeepSeek-R1-0528-Qwen3-8B. This smaller model is designed for users with limited hardware resources, requiring as little as 16 GB of GPU memory when run in half precision. It achieves state-of-the-art results among open-source models on key benchmarks, making it suitable for both academic and industrial applications.
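
As a rough illustration of how the distilled variant could be run locally, the sketch below loads it with Hugging Face transformers in half precision. The repository name and the chat-template usage shown here are assumptions based on common conventions for DeepSeek releases, not details given in the article; consult the official model card for the exact identifiers.

```python
# Minimal sketch: running the distilled 8B variant locally with Hugging Face
# transformers. The repository id and chat-template usage are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-0528-Qwen3-8B"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision keeps the weights near 16 GB
    device_map="auto",           # place layers on the available GPU(s)
)

messages = [{"role": "user", "content": "Prove that the square root of 2 is irrational."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```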

Community and Developer Reactions

The AI community has responded enthusiastically to the update. Developers and influencers have praised R1-0528’s coding abilities and its near-parity with top-tier models. One developer noted its ability to generate clean, functional code on the first attempt, while others highlighted its rapid progress toward matching the performance of OpenAI o3 and Gemini 2.5 Pro.

Looking Ahead

The release of DeepSeek-R1-0528 underscores DeepSeek’s commitment to open-source innovation in AI. By combining benchmark-leading performance, practical features, and a permissive license, DeepSeek is empowering developers, researchers, and businesses to leverage cutting-edge language model technology without the barriers of proprietary systems.
