Jinse Finance reported that on September 17, the DeepSeek-R1 paper was featured on the cover of Nature, with DeepSeek founder and CEO Liang Wenfeng as the corresponding author. The research team demonstrated experimentally that the reasoning ability of large language models can be improved through pure reinforcement learning, reducing the need for human input, and that models trained this way outperform those trained by traditional methods on tasks such as mathematics and programming. DeepSeek-R1 has received 91.1k stars on GitHub and has drawn praise from developers worldwide. An assistant professor at Carnegie Mellon University and other commentators said the model has evolved from a powerful but opaque problem solver into a system capable of human-like dialogue. In an editorial, Nature noted that DeepSeek-R1 is the first mainstream LLM to be published after peer review, calling it a welcome step toward transparency: peer review helps clarify how LLMs work, allows their effectiveness to be assessed, and enhances model safety.