diff --git a/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md b/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md
new file mode 100644
index 0000000..36d1cb5
--- /dev/null
+++ b/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md
@@ -0,0 +1,2 @@
+DeepSeek open-sourced DeepSeek-R1, an LLM fine-tuned with reinforcement learning (RL) to enhance its reasoning capability. DeepSeek-R1 achieves results on par with OpenAI's o1 model on several benchmarks, including MATH-500 and SWE-bench.
+DeepSeek-R1 is based on DeepSeek-V3, a mixture-of-experts (MoE) model recently open-sourced by DeepSeek. The base model is fine-tuned using Group Relative Policy Optimization (GRPO), a reasoning-oriented variant of RL. The research team also performed knowledge distillation from DeepSeek-R1 to open-source Qwen and Llama models and released several versions of each.
\ No newline at end of file
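
The distinguishing feature of GRPO mentioned above is that it estimates advantages relative to a group of sampled completions rather than with a learned value network. A minimal sketch of that group-relative normalization, assuming scalar per-completion rewards (function name and interface are illustrative, not from the DeepSeek codebase):

```python
import statistics

def grpo_advantages(rewards: list[float]) -> list[float]:
    """Group-relative advantages: normalize each completion's reward
    against the mean and std of its own sampling group, so no separate
    value (critic) network is needed."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # avoid div-by-zero when all rewards are equal
    return [(r - mean) / std for r in rewards]
```

Completions scored above their group's average receive positive advantages and are reinforced; below-average ones are penalized, which is what drives the reasoning-oriented fine-tuning.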