From 8d6d1abb1de21522b5577c270ebbd030f7d43e38 Mon Sep 17 00:00:00 2001 From: June Troy Date: Thu, 6 Feb 2025 19:02:21 +0000 Subject: [PATCH] Add 'DeepSeek Open-Sources DeepSeek-R1 LLM with Performance Comparable To OpenAI's O1 Model' --- ...R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md | 2 ++ 1 file changed, 2 insertions(+) create mode 100644 DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md diff --git a/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md b/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md new file mode 100644 index 0000000..911558d --- /dev/null +++ b/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md @@ -0,0 +1,2 @@
+DeepSeek open-sourced DeepSeek-R1, an LLM fine-tuned with reinforcement learning (RL) to improve its reasoning capability. DeepSeek-R1 achieves results on par with OpenAI's o1 model on several benchmarks, including MATH-500 and SWE-bench.
+DeepSeek-R1 is based on DeepSeek-V3, a mixture-of-experts (MoE) model recently open-sourced by DeepSeek. The base model is fine-tuned using Group Relative Policy Optimization (GRPO), a reasoning-oriented variant of RL. The research team also performed knowledge distillation from DeepSeek-R1 into open-source Qwen and Llama models and released several versions of each.
\ No newline at end of file
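
For context on the GRPO mention above: GRPO replaces PPO's learned value (critic) network with a group-relative baseline, sampling several responses per prompt and scoring each one against the statistics of its own group. Below is a minimal sketch of that advantage computation; the function name, shapes, and reward values are illustrative assumptions, not taken from DeepSeek's code.

```python
import numpy as np

def group_relative_advantages(rewards: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """GRPO-style advantages: normalize each sampled response's reward
    against the mean/std of its own group, so no critic network is needed."""
    # rewards: shape (group_size,), one scalar reward per sampled response
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# Illustrative only: four responses to one prompt, graded 1 (correct) or 0
rewards = np.array([1.0, 0.0, 1.0, 0.0])
print(group_relative_advantages(rewards))  # correct answers get positive advantage
```

These per-response advantages then weight the policy-gradient update in place of critic-derived estimates, which is what makes the method comparatively cheap for reasoning-style RL fine-tuning.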