From 617ff1ba4d42c58f072ddb323c2068ff22944c13 Mon Sep 17 00:00:00 2001 From: earlenegreenwo Date: Sun, 23 Feb 2025 16:29:22 +0000 Subject: [PATCH] Add 'DeepSeek Open-Sources DeepSeek-R1 LLM with Performance Comparable To OpenAI's O1 Model' --- ...R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md | 2 ++ 1 file changed, 2 insertions(+) create mode 100644 DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md diff --git a/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md b/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md new file mode 100644 index 0000000..9cf6850 --- /dev/null +++ b/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md @@ -0,0 +1,2 @@ +
DeepSeek open-sourced DeepSeek-R1, an LLM fine-tuned with reinforcement learning (RL) to improve reasoning capability. DeepSeek-R1 achieves results on par with OpenAI's o1 model on several benchmarks, including MATH-500 and SWE-bench.
+
+DeepSeek-R1 is based on DeepSeek-V3, a mixture-of-experts (MoE) model recently open-sourced by DeepSeek. This base model is fine-tuned using Group Relative Policy Optimization (GRPO), a reasoning-oriented variant of RL. The research team also performed knowledge distillation from DeepSeek-R1 to open-source Qwen and Llama models, releasing several versions of each.
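+
+GRPO's distinguishing feature is that it drops the learned value function used in PPO-style RL: for each prompt it samples a group of responses, scores them, and uses each response's reward normalized against the group's mean and standard deviation as that response's advantage. The sketch below illustrates only this group-relative advantage computation; it is a minimal illustration under assumed tensor shapes, not DeepSeek's implementation, and the function name `grpo_advantages` is hypothetical.
+
+```python
+import torch
+
+def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
+    """Group-relative advantages: each sampled response's reward,
+    normalized by the mean and std of its group's rewards.
+    Hypothetical sketch, not DeepSeek's actual code."""
+    # rewards: shape (num_prompts, group_size), one scalar reward per response
+    mean = rewards.mean(dim=1, keepdim=True)
+    std = rewards.std(dim=1, keepdim=True)
+    return (rewards - mean) / (std + eps)
+
+# Example: one prompt with a group of four sampled responses,
+# two of which passed a correctness check (reward 1.0).
+rewards = torch.tensor([[1.0, 0.0, 1.0, 0.0]])
+print(grpo_advantages(rewards))  # correct responses get positive advantages
+```
+
+Because the baseline comes from the sampled group itself rather than a critic network, this setup avoids training a separate value model, which is one reason the approach suits large-scale reasoning-oriented RL.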