DeepSeek R2 Faces Another Delay: Where to Ask AI Questions Instead

DeepSeek’s highly anticipated R2 model is facing another delay after the company failed to get training working on Huawei’s Chinese-made Ascend chips. R2 was originally scheduled for release in May 2025, but it has been postponed multiple times.

Now DeepSeek will have to secure more NVIDIA GPUs, and do so at a time when demand far outstrips supply. That’s good news for NVDA investors, but probably bad for everyone else. Here’s everything you need to know about the current state of affairs.

What is DeepSeek R2?

Fundamentally, DeepSeek R2 is the next-generation reasoning model from DeepSeek, the Chinese AI lab backed by the hedge fund High-Flyer.

DeepSeek models are known for their strong performance and efficiency, and the lab has a reputation for consistently exceeding expectations. We don’t have much concrete information about R2’s specs, but we know it’s being trained on a much larger dataset than its predecessor.

By the way, if you need a place to ask AI questions about R2, or want to chat with its predecessors R1 and V3, you can do so on Overchat AI. Learn more here.

DeepSeek R1 was released in January 2025. It matched the reasoning performance of OpenAI’s o1 while using far less computing power, which matters to anyone who runs AI models on their own hardware.

Rumor has it that R2 will be much smarter while still demanding relatively little compute, which would let people with older machines run a top-tier model locally. (You can’t run models like GPT-5 or Claude 4 on everyday hardware, even if they were open source, which they are not.)

The impressive thing about the R family is the approach. While OpenAI, Anthropic, and Google were scaling their models linearly, essentially throwing ever more data, compute, and money at the problem, the DeepSeek team looked for clever ways to optimize performance. They succeeded, building a model that is just as smart, if not smarter, with far fewer resources, and then made it open source.

DeepSeek R2 Delayed Again

The Financial Times reported that DeepSeek has had ongoing technical problems when trying to train R2 on Huawei’s Ascend chips. The problems are so severe that Huawei has sent a team of engineers to DeepSeek’s offices to help with development. But even with this help, DeepSeek still hasn’t had a successful training run on the Ascend hardware.

The company has had to settle on an unusual split: training the model on NVIDIA chips while trying to make it run on Huawei’s chips for inference.

Looking at the big picture, this undercuts the whole point of China’s push for domestically made chips, since training is where most of the computing happens.

For us regular users, it simply means more waiting, with no idea when the model will ship.

What Caused the Delay?

There are three main reasons:

First, Huawei’s Ascend chips have more technical limitations than expected. Huawei claims the Ascend 910B matches NVIDIA’s A100, but in practice it doesn’t. The chips struggle with the parallel-processing demands of large language models, causing frequent crashes and memory-management issues that make it nearly impossible to complete full training runs.

Second, data quality is a big problem. DeepSeek CEO Liang Wenfeng is still not happy with R2’s performance, partly because the training data available in China is not as good as the global data used for R1. Building high-quality Chinese-language training data from scratch has proven harder than expected.

Third, political pressure has put DeepSeek in a very difficult position. After R1’s release in January, Chinese authorities asked the company to use Huawei’s processors instead of NVIDIA’s systems. This week, the Cyberspace Administration of China asked major tech companies like Tencent, ByteDance, and Baidu to explain why they bought NVIDIA’s H20 chips.

When Is DeepSeek R2 Coming Out?

Unfortunately, nobody knows — not even DeepSeek.

In June, Reuters reported that CEO Liang Wenfeng hadn’t set a launch date because he was still unhappy with the model’s performance. At least we know they won’t release it until they are satisfied, so it’s safe to assume that when R2 does arrive, it will be a significant step up from R1 (GPT-4.5, we’re looking at you).

Chinese media reports say the model will be released “in the coming weeks,” but similar predictions have been circulating since May.

Where to Access DeepSeek R2 After It Comes Out?

When R2 eventually launches, you’ll be able to query it on Overchat AI, as well as through DeepSeek’s existing API platform and the web interface at chat.deepseek.com.

The company has historically maintained open access to its models, offering both free tiers and paid API access at competitive rates.
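If API access follows the pattern DeepSeek has used so far, R2 would be reachable through the company’s OpenAI-compatible chat-completions endpoint. Here is a minimal sketch of building such a request; the endpoint URL matches DeepSeek’s current API, but the idea that R2 will reuse the existing `deepseek-reasoner` model name (today it points at R1) is purely an assumption until launch.

```python
import json

# Minimal sketch of a request to DeepSeek's OpenAI-compatible chat API.
# ASSUMPTION: R2 ships under the existing "deepseek-reasoner" identifier;
# the actual model name is unknown until release.
API_URL = "https://api.deepseek.com/chat/completions"

def build_request(question: str, model: str = "deepseek-reasoner") -> dict:
    """Build the JSON payload for a single-question chat completion."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": question}],
        "stream": False,
    }

payload = build_request("Summarize DeepSeek R1's key innovations.")
# To actually send it you would POST with an API key, e.g.:
#   requests.post(API_URL, json=payload,
#                 headers={"Authorization": f"Bearer {API_KEY}"})
print(json.dumps(payload, indent=2))
```

Because the API mirrors OpenAI’s chat-completions format, existing OpenAI SDK code should only need the base URL and model name swapped to target DeepSeek.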

Integration with popular Chinese platforms like WeChat and DingTalk is expected shortly after launch.

For international users, access will depend on geopolitical considerations and potential export restrictions. DeepSeek has previously made its models available globally, but increasing U.S.-China tensions could complicate international distribution.

Bottom Line

DeepSeek tried to train R2 on Huawei chips, the attempt appears to have failed, and the model now faces another delay because it will have to be trained on NVIDIA hardware after all. That’s a problem: NVIDIA GPUs are scarce, and it’s unclear when DeepSeek will get the quantity it needs. Meanwhile, R2 reportedly requires far more processing power to train than its predecessor.

While I would have liked to finish the article on a positive note, it looks like it will be a little while before we can chat with R2.

Robert Simpson is a seasoned ED Tech blog writer with a passion for bridging the gap between education and technology. With years of experience and a deep appreciation for the transformative power of digital tools in learning, Robert brings a unique blend of expertise and enthusiasm to the world of educational technology. Robert's writing is driven by a commitment to making complex tech topics accessible and relevant to educators, students, and tech enthusiasts alike. His articles aim to empower readers with insights, strategies, and resources to navigate the ever-evolving landscape of ED Tech. As a dedicated advocate for the integration of technology in education, Robert is on a mission to inspire and inform. Join him on his journey of exploration, discovery, and innovation in the field of educational technology, and discover how it can enhance the way we learn, teach, and engage with knowledge. Through his words, Robert aims to facilitate a brighter future for education in the digital age.