July 8, 2025 – Shenzhen, China
Huawei’s Noah’s Ark Lab has issued a firm denial of allegations that its recently launched Pangu Pro MoE large language model is a copy of Alibaba’s Qwen 2.5-14B model. The accusations surfaced after a little-known AI research group, HonestAGI, published a GitHub analysis suggesting that Pangu Pro MoE showed “extraordinary correlation” with Alibaba’s open-source Qwen model.
Huawei publicly stated at the end of last week that it trained Pangu Pro MoE entirely from scratch on its own infrastructure, including its Ascend AI chips, and did not rely on incremental training or reused weights from other companies’ models.
“Huawei strictly adheres to open-source licensing requirements and maintains full independence in the development of its AI models,” the company said. However, it did not disclose specific details about its pretraining data sources or the code frameworks used.

HonestAGI briefly deleted and then reposted its GitHub report, which used fingerprinting techniques to compare parameter distributions across the models. The report gained attention in online AI communities, but some researchers cautioned that correlation alone does not prove copying, as similarities can arise from overlapping model architectures or training data.
Alibaba has not yet publicly addressed the matter.
The incident has reignited debate over transparency, intellectual property, and reproducibility in the rapidly evolving field of large AI models, particularly as Chinese tech companies compete fiercely in the global AI race.
